Article

A Deep Learning Approach to Classify Sitting and Sleep History from Raw Accelerometry Data during Simulated Driving

by Georgia A. Tuckwell 1,*, James A. Keal 2, Charlotte C. Gupta 1, Sally A. Ferguson 1, Jarrad D. Kowlessar 3 and Grace E. Vincent 1

1 School of Health, Medical and Applied Sciences, Central Queensland University, Adelaide 5001, Australia
2 School of Physical Sciences, The University of Adelaide, Adelaide 5005, Australia
3 College of Humanities and Social Sciences, Flinders University, Adelaide 5005, Australia
* Author to whom correspondence should be addressed.
Sensors 2022, 22(17), 6598; https://doi.org/10.3390/s22176598
Submission received: 19 July 2022 / Revised: 17 August 2022 / Accepted: 29 August 2022 / Published: 1 September 2022
(This article belongs to the Special Issue Women’s Special Issue Series: Sensors)

Abstract:
Prolonged sitting and inadequate sleep can impact driving performance. Therefore, objective knowledge of a driver’s recent sitting and sleep history could help reduce safety risks. This study aimed to apply deep learning to raw accelerometry data collected during a simulated driving task to classify recent sitting and sleep history. Participants (n = 84, Mean ± SD age = 23.5 ± 4.8 years, 49% female) completed a seven-day laboratory study. Raw accelerometry data were collected from a thigh-worn accelerometer during a 20-min simulated drive (08:10 h and 17:30 h each day). Two convolutional neural networks (CNNs; ResNet-18 and DixonNet) were trained to classify accelerometry data into four classes (sitting or breaking up sitting and 9-h or 5-h sleep). Accuracy was determined using five-fold cross-validation. ResNet-18 produced higher accuracy scores: 88.6 ± 1.3% for activity (compared to 77.2 ± 2.6% from DixonNet) and 88.6 ± 1.1% for sleep history (compared to 75.2 ± 2.6% from DixonNet). Class activation mapping revealed distinct patterns of movement and postural changes between classes. Findings demonstrate the suitability of CNNs for classifying sitting and sleep history using thigh-worn accelerometer data collected during a simulated drive. This approach has implications for the identification of drivers at risk of fatigue-related impairment.

1. Introduction

Safety-critical tasks such as driving require a range of skills and competencies that must be performed at a high level [1,2,3]. Psychomotor functions required for driving are known to be impacted by inadequate sleep [4]. Specifically, reduced psychomotor functioning as a result of inadequate sleep manifests in impaired reaction time and precise motor responses [5,6,7]. With specific reference to driving, research has reported delayed response times in drivers with inadequate sleep, with drivers taking up to 44% longer to start braking [8]. Drowsy or fatigued driving represents a significant risk to driver safety, with 20% of fatalities globally reported to be fatigue-related [5,8]. Importantly, impairments in psychomotor functioning, like those seen following inadequate sleep, have also been reported following periods of prolonged sitting [4,9,10,11,12]. Prolonged sitting was associated with longer reaction times when compared to breaking up sitting with physical activity [13]. Given that both inadequate sleep and prolonged sitting can impact driving performance, the objective assessment of sitting and sleep history prior to or in the early stages of a drive may provide an avenue to reduce the risk of road accidents.
Recent developments in deep learning for driver fatigue detection have relied on computer vision (information derived from images) using driver-facing cameras [14]. Recordings of drivers have assessed physiological and behavioural features such as pupil dilation, yawning and blinking [14,15,16,17,18] and demonstrated a high degree of accuracy for driver fatigue detection [17,19,20]. Computer vision approaches rely primarily on facial feature detection, but they provide no information about vehicle control or performance (e.g., speed and lane variability) [21]. An analysis of driver behaviours at the interface with vehicle controls could provide additional insights into the relationship between fatigue and driving performance.
Driver fatigue detection approaches have also utilised both physiological- and vehicle-based measures [15]. Physiological-based measures for driver fatigue detection include brain activity, ocular activity, muscle tone, cardiac activity and respiration rate [22,23,24,25,26]. Current vehicle-based measures for driver fatigue detection include steering wheel angle and lane deviation analysis [19]. One study, which integrated physiological signals (respiratory signals and pulse rate) with vehicle-based data (steering wheel angle) using a support vector machine approach, classified driver drowsiness with an accuracy of up to 96% [27]. While effective, the current physiological driver fatigue detection methods can be expensive and require labour-intensive data processing [7,23]. A single signal that captures the physical movement during driving may be a more cost-effective means of detecting fatigue. The assessment of physical movement may also provide information about how movement changes with fatigue and how such changes impact driving performance. Critically, the assessment of physical movement may also be used to classify sitting history, which impacts aspects of performance relevant to driving but currently has an unknown relationship to driving performance. A low-cost body-worn device, such as an accelerometer, may provide a means to capture a movement signal that can be used as an indicator of driver state.
Body-worn sensors, such as triaxial accelerometers (i.e., three axes: x, y, z), can collect high-resolution physical movement data (e.g., 20 samples per second) [28]. While the physical movements required during driving (steering, braking and accelerating events) can be recorded using such accelerometers [29], detecting subtle and systematic changes in movement requires a computationally efficient methodology that can recognise and extract patterns within large data sets. Previous research has demonstrated the value of a deep learning approach to interrogate raw accelerometry data. Recently, Dixon et al. [30] demonstrated the use of a convolutional neural network (CNN) to detect subtle patterns within accelerometer data. Dixon et al. [30] used data from body-worn accelerometers to classify three different terrain surfaces traversed at running pace (concrete, synthetic and woodchip) with an accuracy of up to 97%. An accelerometer placed on the thigh of a driver will capture leg movements as they relate to the pedal inputs (acceleration and brake) of the vehicle control. This may be sufficient to detect subtle patterns related to sitting and sleep history in drivers using a CNN approach.
A CNN is a deep learning architecture inspired by the visual cortex of an animal [31]. The networks are designed to automatically extract features in the form of translation-invariant spatial relationships [32], and thus, they excel at pattern detection in large datasets of spatially-dependent variables, such as images [32,33,34]. Automatic feature extraction is an important consideration for datasets in which the pertinent signal features (i.e., changes in measured acceleration indicative of driver state) are unknown. CNNs are used ubiquitously in supervised machine learning to differentiate between previously defined classes in data (i.e., conditions: sitting/breaking up sitting and sleep history) [33]. Further, the post-hoc analysis of a trained CNN model, such as class activation mapping (CAM), can be used to identify and visually present the learned features that proved key to class differentiation [35,36].
The primary aim of this study was to apply deep learning to raw accelerometry data collected during a simulated driving task to classify the recent sitting and sleep history of the driver. A secondary aim was to use the post-hoc analysis of the trained models to characterise differences in physical movement between the two classes (i.e., sitting and sleep history). This analysis may provide the basis for a novel, accessible and more feasible method of detecting at-risk drivers through physical movement relating to vehicle control.

2. Materials and Methods

2.1. Study Design

The approach comprised four phases: experimental study (data collection), CNN training, CNN evaluation and class activation mapping (CAM). The study was a laboratory-based, randomised, between-subjects, factorial (2 × 2) experimental design. Participants were randomly allocated to one of four conditions, consisting of a sleep opportunity condition of 9-h or 5-h (time in bed) and an activity condition of sitting or breaking up sitting (further detail on conditions provided below). Participants lived at the Appleton Institute Sleep Laboratory for 7 days (1 Adaptation Day, 5 Experimental Days and 1 Recovery Day). This study was approved by the Central Queensland University Human Research Ethics Committee (0000021914) and registered with the Australian New Zealand Clinical Trials Registry (12619001516178). Data from this study form part of a larger study, and the complete study protocol is published elsewhere [37].

2.2. Participants

Healthy adult males (n = 43) and females (n = 41) were recruited from the Adelaide region (Australia). Inclusion criteria were: being aged between 18 and 35 years, having a body mass index (BMI) between 18 and 30 kg/m2, being a non-smoker, not currently being a shift-worker and not being diagnosed with a sleep disorder. Inclusion criteria were based on previous studies investigating cognitive impacts of breaking up sitting [1,2,37,38,39,40,41] to control for age and health-related factors which may have influenced outcomes. Participants were screened using a number of standard sleep, general health and physical activity questionnaires, with further information available here [37]. Participants were randomly assigned to one of four experimental conditions: (a) sitting and 9-h sleep opportunity, (b) sitting and 5-h sleep opportunity, (c) breaking up sitting and 9-h sleep opportunity or (d) breaking up sitting and 5-h sleep opportunity (see Table 1). During the experimental days (09:00 h–17:30 h), participants allocated to breaking up sitting conditions walked on a treadmill for 3 min every 30 min at a walking speed of 3.2 km/h. Participants allocated to sitting conditions remained seated during this time. All participants were provided with a 9-h sleep opportunity on the first night (Adaptation) and last night (Recovery) (22:00 h to 07:00 h). On the experimental nights, participants were assigned to one of two sleep opportunities: 9-h sleep opportunity (22:00 h–07:00 h) or 5-h sleep opportunity (02:00 h–07:00 h). Participants provided written informed consent and were compensated financially for their time at the conclusion of the study (AUD$780).

2.3. Experimental Procedure

The participants lived in a sound-attenuated and temperature-controlled (21 ± 2 °C) sleep laboratory for seven consecutive days and nights. During the day, light levels were kept at >100 lux, reflecting typical daily indoor light levels. On the adaptation day, participants were familiarised with all relevant questionnaires and all components of the driving task battery. On experimental days (E1–E5), participants completed a simulated work shift between 09:00 h and 17:30 h. Prior to the start of the simulated work shift (08:10 h) and immediately following the work shift (17:30 h), all participants completed a simulated commute, consisting of a 20-min driving task (Figure 1). A thigh-worn accelerometer (product details described in the next section) was attached to each participant on adaptation day, and it was removed on the recovery day. Physical activity levels were monitored continuously for the 7-day period via triaxial accelerometry. Only accelerometry data collected during each 20-min driving simulator task, as indicated in Figure 1, were used as input data for the classification by the CNNs.

2.4. Measures

2.4.1. Accelerometry

Leg movement data during each 20-min simulated drive were captured using triaxial accelerometry. The ActivPAL Micro 4 monitor (PAL Technologies, Glasgow, UK) was used, which is approximately 24 × 43 × 5 mm in size and weighs 9 g, with a sampling rate of 20 Hz and a range of ±4 g. The device was housed in a nitrile sleeve, attached to the anterior midline of the right thigh of each participant via an adhesive dressing and worn continuously throughout the 7-day study protocol. Raw accelerometer signals (x, y, z) were collected and processed from the raw uncompressed data files available through the ActivPAL software. The raw data signal files were extracted as CSV files for pre-processing and training of the CNNs. Triaxial accelerometry was also used to objectively measure participants’ activity during the protocol, to confirm the ground truthing labels of the sitting or breaking up sitting conditions, using the ActivPAL algorithm to measure step count.

2.4.2. Driving Simulator

Driving performance was simulated using the York Driving Simulator (York Computer Technologies, Kingston, ON, Canada) at the start and end of each participant’s work shift. The driving simulator was presented on a single computer screen displaying a forward-facing view from the driver’s position. The controls of the driving simulator consisted of a brake and accelerator pedal on the floor below the computer and a steering wheel attached to the desk. Participants completed a 20-min driving task consisting of a simulated daytime rural drive with a speed limit of 110 km/h. The driving scenario presented in the simulation consisted of a single-carriageway, two-lane road with traffic travelling in both directions. Participants were instructed to use the right leg (the leg with the ActivPAL attached) to operate the brake and accelerator whilst completing the driving task. This driving simulator task has been used previously in driving and sleep restriction studies [1,42,43].

2.4.3. Sleep Monitoring

An activity monitor was worn continuously to objectively verify the sleep obtained in each condition (9-h or 5-h sleep opportunities). The Actical device (MiniMitter/Respironics, Bend, OR, USA) is a wrist-worn activity monitor validated as a measure of sleep time and wakefulness [44]. The Actical device measures 28 × 27 × 10 mm and weighs 17 g. It uses a piezoelectric omnidirectional accelerometer with a range of 0.5 to 30 Hz to capture the frequency, intensity and duration of human movement [45]. The monitors were worn on the non-dominant wrist for the duration of the protocol (24 h per day). Data were downloaded using Actical software (Phillips Actical MiniMitter, Respironics, Bend, OR, USA).

2.5. Statistical Analysis

Statistical analyses of sitting and sleep history (ground truthing labels) across experimental days (E1–E5) were performed using SPSS 26.0 Software (IBM Corp., Armonk, NY, USA). To confirm ground truthing labels for sleep history (i.e., distinct classes of 9-h and 5-h), an analysis of the total sleep time collected from actigraphy was conducted using independent t-tests. To confirm the ground truthing labels for sitting history (i.e., distinct classes of sitting or breaking up sitting), an independent t-test was performed for the step count retrieved from the ActivPAL. Data are reported as the Mean ± Standard Deviation. Statistical significance was set at p < 0.05.
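The independent-samples comparison described above can be sketched as follows. This is an illustrative Welch t statistic on simulated sleep-time values, not the study's data; with means this far apart relative to their spread, |t| far exceeds the ~2.0 critical value, consistent with clearly separable classes.

```python
# Hedged sketch of the ground-truth verification step: a Welch t statistic
# comparing total sleep time between the 9-h and 5-h classes.
# All sample values below are simulated for illustration.
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

sleep_9h = [8.0, 7.5, 8.3, 7.9, 8.6, 7.7]   # hours, simulated
sleep_5h = [4.5, 4.7, 4.3, 4.6, 4.4, 4.8]   # hours, simulated
t = welch_t(sleep_9h, sleep_5h)              # large |t| implies p << 0.05
```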

2.6. Accelerometry Data Preparation

Uncompressed acceleration data, sampled at a rate of 20 Hz from the thigh-worn ActivPAL 4, were downloaded using ActivPAL software (V8.10.8.76, PAL Technologies, Glasgow, UK) and labelled according to the time-stamped driving simulator files (two drives per day) using a customised script developed in Python (Version 3.4, Python Software Foundation). The raw accelerometer files contained data for the entire protocol. To isolate the relevant data, the drive start times contained in the driving simulator data files were cross-referenced against the time column in the raw accelerometer file. For each participant, there were 10 raw accelerometry files (representing each of the 20-min drives across the 5 experimental days). For the classification of prior sitting history (sitting or breaking up sitting), the first drive on E1 was excluded from the input data, as the experimental manipulation of sitting and breaking up sitting had not begun prior to this drive. For the classification of sleep history, the first drive on E1 was included in the input data, as the participants had a 9-h or 5-h sleep opportunity the night prior. Acceleration data from each axis were scaled to the range −4 to 4, with a value of 1 being equal to the acceleration due to gravity on Earth (~9.81 m·s−2), i.e., 1 g.
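The isolation step described above might be sketched as follows. The column names (`time`, `x`, `y`, `z`) and the assumption that the exported values are already in units of g are hypothetical; the actual ActivPAL export layout may differ.

```python
# Illustrative sketch of isolating one 20-min drive from the full-protocol
# accelerometer CSV by cross-referencing the drive start time.
import numpy as np
import pandas as pd

FS = 20                        # Hz, ActivPAL Micro 4 sampling rate
DRIVE_SAMPLES = 20 * 60 * FS   # samples in a 20-min drive

def extract_drive(csv_path, drive_start):
    """Return an (N, 3) float32 array of x/y/z acceleration (in g) for one drive."""
    df = pd.read_csv(csv_path, parse_dates=["time"])   # assumed column name
    drive = df[df["time"] >= drive_start].head(DRIVE_SAMPLES)
    xyz = drive[["x", "y", "z"]].to_numpy(dtype=np.float32)
    return np.clip(xyz, -4.0, 4.0)  # keep within the device's ±4 g range
```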

2.7. Neural Network Architecture

2.7.1. DixonNet

Two network models were trained on the processed data. The first was the architecture initially presented by Dixon et al. [30], referred to as DixonNet hereafter (Figure 2). We chose to use DixonNet in this work due to the network’s demonstrated efficacy as a classifier of accelerometry data. The network operates on one-dimensional data with three channels at each position: one channel for each of the x, y and z axes of the recorded accelerometry. The first two layers are convolutional, each having 100 learned filters with a kernel size of 16 and a ReLU nonlinearity. These are followed by a max-pooling layer with a kernel size of 4, effectively quartering the length of the time series while retaining the most strongly activated features. Two more convolutional layers, configured identically to the first two, conclude the spatial feature extraction portion of the network. Global average pooling, a dropout layer (p = 0.5) and one fully connected layer map these features to the number of outputs required for classification. In the current work, classification was binary; hence, the output size was set to one.
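A minimal PyTorch sketch of DixonNet as described above. Layer counts and sizes follow the text; padding and other implementation details are assumptions, not the authors' exact code.

```python
# Sketch of the DixonNet architecture: two conv+ReLU layers, max-pooling,
# two more identical conv layers, global average pooling, dropout and a
# single linear output for binary classification.
import torch
import torch.nn as nn

class DixonNet(nn.Module):
    def __init__(self, in_channels=3, n_outputs=1):
        super().__init__()
        def conv(c_in):
            return nn.Sequential(nn.Conv1d(c_in, 100, kernel_size=16), nn.ReLU())
        self.features = nn.Sequential(
            conv(in_channels), conv(100),   # first two convolutional layers
            nn.MaxPool1d(kernel_size=4),    # quarters the sequence length
            conv(100), conv(100),           # two more identical conv layers
        )
        self.pool = nn.AdaptiveAvgPool1d(1) # global average pooling
        self.drop = nn.Dropout(p=0.5)
        self.fc = nn.Linear(100, n_outputs) # single binary logit

    def forward(self, x):                   # x: (batch, 3, window_len)
        h = self.pool(self.features(x)).flatten(1)
        return self.fc(self.drop(h))

logits = DixonNet()(torch.randn(2, 3, 4096))  # one batch of two windows
```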

2.7.2. ResNet-18

A second architecture was explored: a custom implementation of ResNet-18 [46]. This implementation was identical to that presented in He and colleagues’ work [46], except with each two-dimensional convolution layer replaced with a one-dimensional equivalent. This modification was made to permit training on time-series data (such as accelerometry data), which is one-dimensional, rather than two-dimensional, like image data. The architecture of ResNet-18 is shown in Figure 3. ResNet-18 has an initial convolutional layer with 64 filters and a kernel size of 7 (7 × 7 in the original two-dimensional network) [35], followed by a batch normalisation and ReLU activation step. These initial layers are then fed to a max-pooling layer with a kernel size of 3 and a stride of 2. The network then contains 4 residual blocks. Each block consists of two convolutional layers with a kernel size of 3 and 64, 128, 256 and 512 filters for blocks 1, 2, 3 and 4, respectively. These blocks are connected with the residual connections detailed in He et al. [46], providing a residual learning function. The final residual block is followed by a global average pooling layer (average pooling with an output size of 1) and a fully connected linear layer. As with DixonNet, the architecture was used for binary classification and, hence, had an output size of 1.
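The one-dimensional adaptation described above can be sketched as a single residual block with each 2-D operation swapped for its 1-D equivalent. This is an assumption about how the custom implementation was written, following the basic-block design of He et al.

```python
# Sketch of a ResNet basic block with Conv2d/BatchNorm2d replaced by their
# one-dimensional equivalents, as described in the text.
import torch
import torch.nn as nn

class BasicBlock1d(nn.Module):
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.conv1 = nn.Conv1d(c_in, c_out, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm1d(c_out)
        self.conv2 = nn.Conv1d(c_out, c_out, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm1d(c_out)
        # 1x1 projection on the shortcut when the shape changes
        self.down = (nn.Sequential(nn.Conv1d(c_in, c_out, 1, stride=stride, bias=False),
                                   nn.BatchNorm1d(c_out))
                     if (stride != 1 or c_in != c_out) else nn.Identity())

    def forward(self, x):
        h = torch.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return torch.relu(h + self.down(x))   # residual connection

out = BasicBlock1d(64, 128, stride=2)(torch.randn(2, 64, 256))
```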

2.7.3. Model Settings and Training

During training, rather than providing the whole drive task duration of data to the network at once, shorter windows of the data were taken from each drive sequence. This approach was chosen to decrease the drive time needed to achieve classification. For a given window length, any valid subset of consecutive data points was available for use as a labelled training sample, and thus, overlapping windows were present in the training data. Window length was therefore a configurable hyperparameter of the training process. However, window length also influences the practicability of the model, since it dictates the duration of data required to obtain an accurate classification after training. We used a window length of 4096 samples, or approximately 200 s at 20 Hz. A binary cross-entropy loss function was used to evaluate the networks’ outputs on a batch-by-batch basis. The batch size was set to 64 windows. A variant of gradient descent, Adaptive Moment Estimation (Adam), was used to update the networks’ learnable parameters once per batch in the direction that minimised the loss function. The learning rate was initially set to 10−4 and reduced by a factor of 10 each time the loss on the training data failed to improve for 10 consecutive epochs. Data were randomly shuffled and split 80/20 (training/validation) within each fold of a 5-fold cross-validation. Accuracy and F-score (the harmonic mean of recall and precision) were used to assess model performance on the validation data during training. Both models were developed in Python using PyTorch [47] and trained on a consumer-grade computer (2× GPU: Nvidia GTX 1080, 2560 CUDA cores, 8 GB VRAM).
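The training configuration described above (Adam at 10−4, binary cross-entropy, a tenfold learning-rate reduction after 10 stagnant epochs, batches of 64 windows) can be condensed into the following PyTorch sketch. The stand-in network and random tensors replace the real models and windowed accelerometry.

```python
# Condensed sketch of the training loop: BCE loss, Adam optimiser and a
# plateau-based learning-rate schedule, on dummy data for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv1d(3, 8, 16), nn.AdaptiveAvgPool1d(1),
                      nn.Flatten(), nn.Linear(8, 1))       # stand-in network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.1, patience=10)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(3):                        # a few epochs for illustration
    x = torch.randn(64, 3, 4096)              # one batch of 64 windows
    y = torch.randint(0, 2, (64, 1)).float()  # binary class labels
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    sched.step(loss.item())                   # reduce lr if loss stagnates
```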

2.8. Class Activation Mapping (CAM)

Class Activation Mapping (CAM) is a method of indicating which areas within a window of input data contribute the most to the CNN’s resultant classification. In essence, CAM may be used to generate a heatmap across a window of data, highlighting key characteristic regions. In line with previous studies which have utilised CAM for both time-series and object recognition, a global average pooling layer was used as the penultimate layer in both networks explored, followed by a single fully connected linear layer [35,36,48]. In these networks, let $S_k(x)$ represent the output of the last convolutional layer on channel $k$ at position $x$. The output of channel $k$ in the global average pooling layer can then be expressed as $f_k = \sum_x S_k(x)$. The input of the final activation function for class $c$ can be defined as
$$g_c = \sum_k w_k^c \sum_x S_k(x)$$
where $w_k^c$ is the weight in the fully connected layer representing the contribution of channel $k$ to class $c$. The class activation map for class $c$, denoted $M_c$, is then
$$M_c(x) = \sum_k w_k^c S_k(x)$$
A visual interpretive analysis of the CAM was conducted to identify repeated patterns within the signals associated with individual classes. The most confidently classified samples were selected for class activation mapping using the network logits: all samples were ranked by the magnitude of the raw network output before the activation function was applied, and the five highest-magnitude and five lowest-magnitude samples for each network and class were chosen for visual interpretation.
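The CAM computation defined above reduces to a weighted sum over channels, which can be sketched with toy tensors as follows (dimensions are illustrative, not the trained models'):

```python
# Sketch of a 1-D class activation map: M_c(x) = sum_k w_k^c * S_k(x),
# with a consistency check that g_c = sum_k w_k^c * f_k equals the sum of
# the (unnormalised) map over positions.
import torch

K, L = 100, 64                 # channels and length of the last conv output
S = torch.randn(K, L)          # S_k(x): feature maps of the last conv layer
w_c = torch.randn(K)           # w_k^c: fully connected weights for class c

cam = torch.einsum("k,kl->l", w_c, S)                      # M_c(x) over the window
heat = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalised 0-1 heatmap
g_c = w_c @ S.sum(dim=1)       # f_k = sum_x S_k(x), then g_c = sum_k w_k^c f_k
```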

3. Results

3.1. Ground Truth Verification

Total sleep time across all experimental days differed (p < 0.001) between the 9-h sleep history class (08:00 ± 00:40 h) and the 5-h sleep history class (04:34 ± 00:18 h). Step count across all experimental days differed (p = 0.031) between the sitting class (1231 ± 661) and the breaking up sitting class (7016 ± 742).

3.2. Model Performance

Table 2 presents a summary of the performance of each model for the classification of sitting history and sleep history. Figure 4 presents the average across folds of model performance measured via F-score and training loss across epochs. The confusion matrices for each model are presented in Figure 5. Overall, ResNet-18 outperformed DixonNet in both classification accuracy and model performance, as seen in Figure 4.

3.3. Class Activation Mapping (CAM)

The class activation mapping for sitting and sleep history for both models is shown in Figure 6. Two example signals are provided in Figure 6 for each model and class combination, representing the most confidently predicted exemplars of each class. The CAM for DixonNet classifying the sitting history shows that a key feature of the signal may be postural changes, as seen by the changes in acceleration (g) along the y and z signals. Postural changes are visible through differences in the way that the background acceleration due to gravity (1 g) is distributed over the three axes. Different distributions of the acceleration caused by the Earth’s gravity therefore reveal different passive thigh angles, indicative of postural changes [49]. In contrast, the ResNet-18 CAM for activity highlights more movement along all three axes of the accelerometer. The CAM of the 5-h sleep history class classified by DixonNet displays periods of inactivity punctuated by sudden and rapid movements. In comparison, the CAM of the 9-h sleep history class for DixonNet shows inconsistent and smaller movements across time. The CAM of the ResNet-18 model for the 5-h sleep history class highlights frequent rapid movements across time. The 9-h sleep history CAM, as classified by ResNet-18, demonstrates smaller movements across time, similar to the 9-h sleep history CAM from DixonNet.

4. Discussion

This study investigated the feasibility of applying a deep learning approach to classify prior sitting and sleep history from raw accelerometry data captured during a simulated drive. Two CNNs, DixonNet and ResNet-18, were able to accurately classify sitting and sleep history from raw accelerometry data without prior filtering or signal processing. These results demonstrate the potential application of raw accelerometry data to classify behaviours (i.e., sitting and sleep history) before they can become problematic for driving performance, with the potential to extend this research to future real-time classification.
In this study, ResNet-18 outperformed DixonNet for the classification of sitting and sleep history. ResNet-18 was more accurate than DixonNet and demonstrated less variability across the evaluation metrics. Both models were able to learn from the raw accelerometry data whilst avoiding overfitting, which is often seen with unbalanced datasets or overtraining and is a common problem for many deep learning approaches [50,51]. The high accuracy across the five folds and the consistently decreasing loss shown in the current study demonstrate that this was robust and generalised learning. The model performance outcomes presented in this study are in line with previous CNN research classifying accelerometry data (e.g., fall detection, surface detection), which achieved high accuracy rates (95–97%) whilst demonstrating generalised learning [30,35]. The robust learning shown by both CNN models in this study further speaks to the suitability of applying a deep learning approach to sleep and activity classification using accelerometry data.
The class activation mapping (CAM) presented in this study visualizes key regions of the classified accelerometery signals that contributed strongly to the final classification. This visualization method provides a means to interpret the difference in physical movement seen between activity classes (i.e., sitting or breaking up sitting) and sleep history classes (i.e., 9-h or 5-h) during drives. The current study builds on previous work utilising CAMs for the visualisation and analysis of accelerometry data, which have, up until now, primarily focused on fall detection and classification [35,48]. The current study extends the application of CAMs to visualize the efficacy of a deep learning approach to the application of classifying driver states (e.g., inactive, sleepy). The CAM approach presented in this study provides a new way to visualize the previously researched physiological and behavioural impacts of sleep restriction (e.g., slower reaction time) on drivers [4,8,23,52,53]. The rapid and inconsistent movements displayed in the 5-h sleep history class by both networks may be indicative of the slower reaction times experienced by drivers who are sleep restricted [8,15,54,55]. In contrast, the consistent and smaller movements displayed in the CAM for the 9-h sleep history class from both models may indicate increased psychomotor functioning with more controlled inputs while driving (e.g., smoother braking and accelerating patterns) [15,54].
The CAM for the sitting history emphasises that different features were activated more strongly by the two models. The DixonNet CAM displays periods of limited or no movement for the sitting and breaking up sitting classes, which may indicate that this model recognised and learnt postural changes rather than movement changes. In comparison, ResNet-18 was more strongly activated by movements rather than postural changes. The movement patterns identified by ResNet-18 may reflect the differences in psychomotor function (e.g., reaction time) reported after breaking up sitting in previous research [11,56,57,58]. Previous studies have reported improvements in psychomotor function, reaction time and attention for cognitive tasks after breaking up sitting with light-to-moderate intensity physical activity [13,57,59]. The patterns of movement highlighted via the CAM of ResNet-18 might reflect these improvements in psychomotor function. Unlike sleep history classification, there is limited literature outlining the exact characteristics of physical movements associated with potential behavioural changes during driving as a result of activity levels (i.e., sitting or breaking up sitting) during the day. This is the first study to report CAM analyses on physical leg movements, and it shows clear differences associated with activity levels throughout the day.
The deep learning approach in this study presents accuracy rates (75.7–88.6%) in line with previous deep learning physiological fatigue detection studies, which have achieved accuracies between 74.9 and 96.0% [19,60]. Other studies utilising a deep learning approach for fatigue detection have developed customised hardware (e.g., Arduino smartwatch devices, wearable glove devices) and require complex post-processing of data and the recording of multiple input signals (electroencephalogram, photoplethysmography) [27,61,62]. The methodology used in this study offers an alternative to customised hardware solutions and multi-signal approaches for fatigue detection whilst maintaining classification accuracy rates in line with or higher than previous research. Additionally, the approach presented here may be more applicable to the real-time classification of driver states than the previously mentioned methodologies, which are computationally less efficient and unsuitable for real-time classification.
The placement of the accelerometer is an important aspect to consider in physical activity and movement analysis. In previous studies, accelerometers have been positioned on the hip area, as placement closest to the centre of mass generally better reflects movement of the entire body, aiding classification [63]. However, centralised locations may not be useful in classifying subtle changes in movement, such as those involved in driving (e.g., at the extremities and hands/feet, which are used for driving) [63]. In the current study, thigh-worn accelerometer devices were chosen, as they are particularly useful for classifying driving behaviour during braking and accelerating events whilst also capturing postural information. However, other placements, such as at the wrist, may provide information about physiological responses from steering [62,64]. Detecting additional or new patterns of movement indicative of prior behaviours (i.e., sitting or sleep history) could benefit from alternate accelerometer placements in future classification studies.
There are limitations to this study that should be considered when interpreting the results. The sampling rate of the accelerometer used in this study was 20 Hz, lower than in previous accelerometer classification studies (30–1024 Hz) [30,63,65]. Accurate assessment of movement requires the sampling rate to be at least double the highest frequency of interest (the Nyquist criterion), and acceleration signals related to movement of the body occur at or below 10 Hz [63]. Therefore, while the 20 Hz sampling rate used in this study meets this requirement, a higher sampling rate may have improved model accuracy due to the increase in available data, in line with similar accelerometer classification studies [30,65,66]. This study was conducted with young, healthy participants, limiting generalisability to real-world sedentary drivers, including older adults and people with health issues (e.g., cardiometabolic disorders, diagnosed sleep disorders) [67,68,69]. The monotonous driving scenario used in this study, consisting of low traffic density and high speeds, was chosen because it provides a sensitive measure of sustained attention and fatigue, particularly under sleep restriction [70]. However, it may not be representative of the full range of driving situations, particularly high-density city traffic requiring constant braking and accelerating [71,72]. Therefore, the patterns of movement learnt by the models in this scenario may be less applicable or accurate when applied to data collected in different traffic conditions.
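The Nyquist reasoning above can be expressed as a simple check. The sketch below is purely illustrative (the function name is hypothetical; only the 20 Hz device rate and the ~10 Hz upper bound on body-movement signals come from the text):

```python
def nyquist_sufficient(sampling_rate_hz: float, max_signal_hz: float) -> bool:
    """Return True if the sampling rate is at least twice the highest
    frequency of interest (the Nyquist criterion)."""
    return sampling_rate_hz >= 2 * max_signal_hz

# Body-movement accelerations occur at or below ~10 Hz, so the 20 Hz
# device used here sits exactly at the Nyquist minimum; content above
# 10 Hz (e.g., vehicle vibration) would alias at this rate.
print(nyquist_sufficient(20, 10))  # True: 20 Hz just satisfies 2 x 10 Hz
print(nyquist_sufficient(20, 15))  # False: would require at least 30 Hz
```

This also makes the limitation concrete: 20 Hz leaves no margin, which is one reason a higher sampling rate might have improved model accuracy.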
There are several recommendations to build on the current findings. A logical next step is adapting the current approach to detect a driver's prior sleep within the shortest possible time frame (i.e., the smallest window size). Detecting and classifying a driver's sleep history quickly could highlight impairment sooner, before it puts the driver or other road users at risk [62]. However, while smaller window sizes may allow earlier detection and classification of sleep history, they can also reduce accuracy because each input contains less data. Additionally, training models to classify other sleep durations (e.g., 2 h per night) would help identify fatigued drivers in populations such as professional drivers and new parents, whose sleep opportunities may be less than 5 h per night [6,73].
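The window-size trade-off described above can be illustrated with a minimal sketch: slicing a raw tri-axial stream into fixed-length windows, where a shorter window means an earlier prediction but fewer samples per input. All names are illustrative and this does not reproduce the study's actual preprocessing; only the 20 Hz rate and 20-min drive length come from the text:

```python
from typing import List

def make_windows(samples: List[List[float]], fs_hz: int,
                 window_s: float) -> List[List[List[float]]]:
    """Split a stream of [x, y, z] samples into non-overlapping windows."""
    n = int(fs_hz * window_s)  # samples per window
    return [samples[i:i + n] for i in range(0, len(samples) - n + 1, n)]

# A 20-min drive at 20 Hz yields 24,000 tri-axial samples.
stream = [[0.0, 0.0, 1.0]] * 24_000
print(len(make_windows(stream, fs_hz=20, window_s=60)))  # 20 one-minute windows
print(len(make_windows(stream, fs_hz=20, window_s=10)))  # 120 ten-second windows
```

A 10-s window gives a classification opportunity every 10 s of driving, but each input holds only 200 samples instead of 1200, which is where the accuracy risk noted above comes from.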
An additional way to extend this research would be to examine the complexities of applying a CNN classification approach in real-world scenarios rather than in a highly controlled laboratory environment. Although accelerometers are widely available in many smart devices and could be used to classify driver behaviours in uncontrolled environments (e.g., real-world driving), doing so raises the challenge of accounting for variables that may hinder the classification of driver states (i.e., sedentary or sleepy). Driving a moving car adds noise to the accelerometer signal from vibration and the inertial forces of the vehicle's movements (i.e., speed variability) [74]. Both speed variability and vibration are difficult to control for in accelerometry data alone, as they vary considerably with road conditions and vehicle type [74]. Speed variability is particularly prevalent on urban roads, where rapidly changing traffic conditions, such as changes in traffic density, require constant and rapid adjustments in vehicle speed [75]. However, secondary signals, such as a global navigation satellite system, have been used in conjunction with accelerometers to account for speed variability in studies of road surface classification [76,77]. Although a number of considerations remain before this approach can be applied to real-world driving, this research demonstrates the potential to classify changes in a driver's state in real time from thigh movements.
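One hedged sketch of the secondary-signal idea above: if GNSS speed samples were available alongside the thigh accelerometer, the vehicle's longitudinal acceleration could be estimated from successive speed readings and subtracted from the corresponding accelerometer axis. This is a deliberate simplification (real sensor fusion would need axis alignment, filtering, and clock synchronisation), and all names and values are illustrative:

```python
def vehicle_accel_from_gnss(speeds_mps, dt_s):
    """Estimate longitudinal vehicle acceleration (m/s^2) by finite
    differences over GNSS speed samples taken dt_s seconds apart."""
    return [(b - a) / dt_s for a, b in zip(speeds_mps, speeds_mps[1:])]

def compensate(accel_axis, vehicle_accel):
    """Subtract estimated vehicle acceleration from a body-worn axis,
    leaving (approximately) the wearer's own movement."""
    return [a - v for a, v in zip(accel_axis, vehicle_accel)]

speeds = [10.0, 12.0, 12.0, 9.0]            # GNSS speed samples at 1 Hz
veh = vehicle_accel_from_gnss(speeds, 1.0)  # [2.0, 0.0, -3.0]
thigh_x = [2.1, 0.2, -2.8]                  # aligned accelerometer axis, same rate
print(compensate(thigh_x, veh))             # small residuals = wearer movement
```

The residuals left after subtraction are the signal of interest here: thigh movement attributable to the driver rather than to braking and accelerating of the vehicle.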

5. Conclusions

This study has demonstrated the effectiveness of raw accelerometry data for the classification of prior sitting and sleep history. Accuracy scores of up to 88% were achieved without the need for extensive filtering or hand-crafted feature extraction. The findings extend how accelerometry can assess impairments to safety-critical tasks such as driving, providing a potential alternative to current driver fatigue detection methods. The results also suggest that prior sitting history may lead to changes in movement patterns during driving that are detectable with a deep learning approach. Whilst previous deep learning approaches such as computer vision may detect the appearance of driver states (e.g., fatigue, drowsiness), the approach presented here may provide more insight into the physical impacts of sitting and sleep on driving. Finally, this information could be used to monitor and detect early signs of psychomotor impairment, allowing drivers to be better informed about their ability to safely operate a vehicle.

Author Contributions

Conceptualization, G.A.T., J.A.K. and J.D.K.; investigation, G.A.T., C.C.G. and G.E.V.; data curation, G.A.T. and J.A.K.; methodology, G.A.T., J.A.K. and J.D.K.; formal analysis, G.A.T., J.A.K. and J.D.K.; software, G.A.T., J.A.K. and J.D.K.; validation, J.A.K.; visualisation, G.A.T. and J.A.K.; writing—original manuscript, G.A.T.; writing—reviewing and editing, G.A.T., J.A.K., J.D.K., C.C.G., S.A.F. and G.E.V.; supervision, C.C.G., S.A.F. and G.E.V.; project administration, C.C.G., S.A.F. and G.E.V.; funding acquisition, S.A.F. and G.E.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Australian Research Council, grant number DP190101130.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Human Research Ethics Committee of Central Queensland University (0000021914).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available from the authors upon request. The code is available here: https://github.com/keeeal/sleepy-and-sitting.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gupta, C.C.; Dorrian, J.; Grant, C.L.; Pajcin, M.; Coates, A.M.; Kennaway, D.J.; Wittert, G.A.; Heilbronn, L.K.; Della Vedova, C.B.; Banks, S. It’s not just what you eat but when: The impact of eating a meal during simulated shift work on driving performance. Chronobiol. Int. 2017, 34, 66–77. [Google Scholar] [CrossRef] [PubMed]
  2. Kosmadopoulos, A.; Sargent, C.; Zhou, X.; Darwent, D.; Matthews, R.W.; Dawson, D.; Roach, G.D. The efficacy of objective and subjective predictors of driving performance during sleep restriction and circadian misalignment. Accid. Anal. Prev. 2017, 99, 445–451. [Google Scholar] [CrossRef]
  3. Li, P.; Markkula, G.; Li, Y.; Merat, N. Is improved lane keeping during cognitive load caused by increased physical arousal or gaze concentration toward the road center? Accid. Anal. Prev. 2018, 117, 65–74. [Google Scholar] [CrossRef]
  4. Jackson, M.L.; Croft, R.J.; Kennedy, G.A.; Owens, K.; Howard, M.E. Cognitive components of simulated driving performance: Sleep loss effects and predictors. Accid. Anal. Prev. 2013, 50, 438–444. [Google Scholar] [CrossRef] [PubMed]
  5. Davidović, J.; Pešić, D.; Lipovac, K.; Antić, B. The Significance of the Development of Road Safety Performance Indicators Related to Driver Fatigue. Transp. Res. Procedia 2020, 45, 333–342. [Google Scholar] [CrossRef]
  6. Meng, F.; Li, S.; Cao, L.; Li, M.; Peng, Q.; Wang, C.; Zhang, W. Driving Fatigue in Professional Drivers: A Survey of Truck and Taxi Drivers. Traffic Inj. Prev. 2015, 16, 474–483. [Google Scholar] [CrossRef] [PubMed]
  7. Vogelpohl, T.; Kühn, M.; Hummel, T.; Vollrath, M. Asleep at the automated wheel—Sleepiness and fatigue during highly automated driving. Accid. Anal. Prev. 2019, 126, 70–84. [Google Scholar] [CrossRef]
  8. Mahajan, K.; Velaga, N.R. Effects of Partial Sleep Deprivation on Braking Response of Drivers in Hazard Scenarios. Accid. Anal. Prev. 2020, 142, 105545. [Google Scholar] [CrossRef]
  9. Dunstan, D.W.; Wheeler, M.J.; Ellis, K.A.; Cerin, E.; Green, D.J. Interacting effects of exercise with breaks in sitting time on cognitive and metabolic function in older adults: Rationale and design of a randomised crossover trial. Ment. Health Phys. Act. 2018, 15, 11–16. [Google Scholar] [CrossRef]
  10. Edwardson, C.L.; Yates, T.; Biddle, S.J.H.; Davies, M.J.; Dunstan, D.W.; Esliger, D.W.; Gray, L.J.; Jackson, B.; O’Connell, S.E.; Waheed, G.; et al. Effectiveness of the Stand More AT (SMArT) Work intervention: Cluster randomised controlled trial. BMJ 2018, 363, 15. [Google Scholar] [CrossRef] [Green Version]
  11. Falck, R.S.; Davis, J.C.; Liu-Ambrose, T. What is the association between sedentary behaviour and cognitive function? A systematic review. Br. J. Sports Med. 2017, 51, 800–811. [Google Scholar] [CrossRef] [PubMed]
  12. Kline, C.E.; Hillman, C.H.; Bloodgood Sheppard, B.; Tennant, B.; Conroy, D.E.; Macko, R.F.; Marquez, D.X.; Petruzzello, S.J.; Powell, K.E.; Erickson, K.I. Physical activity and sleep: An updated umbrella review of the 2018 Physical Activity Guidelines Advisory Committee report. Sleep Med. Rev. 2021, 58, 101489. [Google Scholar] [CrossRef] [PubMed]
  13. Chrismas, B.C.R.; Taylor, L.; Cherif, A.; Sayegh, S.; Bailey, D.P. Breaking up prolonged sitting with moderate-intensity walking improves attention and executive function in Qatari females. PLoS ONE 2019, 14, e0219565. [Google Scholar] [CrossRef] [PubMed]
  14. Cyganek, B.; Gruszczyński, S. Hybrid computer vision system for drivers’ eye recognition and fatigue monitoring. Neurocomputing 2014, 126, 78–94. [Google Scholar] [CrossRef]
  15. Hooda, R.; Joshi, V.; Shah, M. A comprehensive review of approaches to detect fatigue using machine learning techniques. Chronic Dis. Transl. Med. 2021, 8, 26–35. [Google Scholar] [CrossRef]
  16. Aboagye, I.A.; Owusu-Banahene, W.; Amexo, K.; Boakye-Yiadom, K.A.; Sowah, R.A.; Sowah, N.L. Design and Development of Computer Vision-Based Driver Fatigue Detection and Alert System. In Proceedings of the 2021 IEEE 8th International Conference on Adaptive Science and Technology (ICAST), Accra, Ghana, 25–26 November 2021; pp. 1–6. [Google Scholar]
  17. Rezaee, K.; Alavi, S.R.; Madanian, M.; Ghezelbash, M.R.; Khavari, H.; Haddadnia, J. Real-time intelligent alarm system of driver fatigue based on video sequences. In Proceedings of the 2013 First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 13–15 February 2013; pp. 378–383. [Google Scholar]
  18. Ji, Q.; Zhu, Z.; Lan, P. Real-time nonintrusive monitoring and prediction of driver fatigue. IEEE Trans. Veh. Technol. 2004, 53, 1052–1068. [Google Scholar] [CrossRef]
  19. Albadawi, Y.; Takruri, M.; Awad, M. A Review of Recent Developments in Driver Drowsiness Detection Systems. Sensors 2022, 22, 2069. [Google Scholar] [CrossRef]
  20. Zhang, Y.; Hua, C. Driver fatigue recognition based on facial expression analysis using local binary patterns. Optik 2015, 126, 4501–4505. [Google Scholar] [CrossRef]
  21. Dong, L.; Cai, J. An Overview of Machine Learning Methods used in Fatigue Driving Detection. In Proceedings of the 2022 7th International Conference on Intelligent Information Technology, Foshan, China, 25–27 February 2022; pp. 65–69. [Google Scholar]
  22. Adão Martins, N.R.; Annaheim, S.; Spengler, C.M.; Rossi, R.M. Fatigue Monitoring Through Wearables: A State-of-the-Art Review. Front. Physiol. 2021, 12, 790292. [Google Scholar] [CrossRef]
  23. Cori, J.M.; Manousakis, J.E.; Koppel, S.; Ferguson, S.A.; Sargent, C.; Howard, M.E.; Anderson, C. An evaluation and comparison of commercial driver sleepiness detection technology: A rapid review. Physiol. Meas. 2021, 42, 074007. [Google Scholar] [CrossRef]
  24. Doudou, M.; Bouabdallah, A.; Berge-Cherfaoui, V. Driver Drowsiness Measurement Technologies: Current Research, Market Solutions, and Challenges. Int. J. Intell. Transp. Syst. Res. 2020, 18, 297–319. [Google Scholar] [CrossRef]
  25. Kundinger, T.; Sofra, N.; Riener, A. Assessment of the Potential of Wrist-Worn Wearable Sensors for Driver Drowsiness Detection. Sensors 2020, 20, 1029. [Google Scholar] [CrossRef] [PubMed]
  26. Sahayadhas, A.; Sundaraj, K.; Murugappan, M. Detecting Driver Drowsiness Based on Sensors: A Review. Sensors 2012, 12, 16937–16953. [Google Scholar] [CrossRef]
  27. Lee, B.; Lee, B.; Chung, W. Standalone Wearable Driver Drowsiness Detection System in a Smartwatch. IEEE Sens. J. 2016, 16, 5444–5451. [Google Scholar] [CrossRef]
  28. Edwardson, C.L.; Winkler, E.A.H.; Bodicoat, D.H.; Yates, T.; Davies, M.J.; Dunstan, D.W.; Healy, G.N. Considerations when using the activPAL monitor in field-based research with adult populations. J. Sport Health Sci. 2017, 6, 162–178. [Google Scholar] [CrossRef] [PubMed]
  29. Hallvig, D.; Anund, A.; Fors, C.; Kecklund, G.; Karlsson, J.G.; Wahde, M.; Åkerstedt, T. Sleepy driving on the real road and in the simulator—A comparison. Accid. Anal. Prev. 2013, 50, 44–50. [Google Scholar] [CrossRef]
  30. Dixon, P.C.; Schütte, K.H.; Vanwanseele, B.; Jacobs, J.V.; Dennerlein, J.T.; Schiffman, J.M.; Fournier, P.A.; Hu, B. Machine learning algorithms can classify outdoor terrain types during running using accelerometry data. Gait Posture 2019, 74, 176–181. [Google Scholar] [CrossRef]
  31. Fong, R.C.; Scheirer, W.J.; Cox, D.D. Using human brain activity to guide machine learning. Sci. Rep. 2018, 8, 5397. [Google Scholar] [CrossRef]
  32. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Into Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef]
  33. Ferreira, J.; Carvalho, E.; Ferreira, B.V.; De Souza, C.; Suhara, Y.; Pentland, A.; Pessin, G. Driver behavior profiling: An investigation with different smartphone sensors and machine learning. PLoS ONE 2017, 12, e0174959. [Google Scholar] [CrossRef]
  34. Sheng, B.; Moosman, O.M.; Del Pozo-Cruz, B.; Del Pozo-Cruz, J.; Alfonso-Rosa, R.M.; Zhang, Y. A comparison of different machine learning algorithms, types and placements of activity monitors for physical activity classification. Measurement 2020, 154, 107480. [Google Scholar] [CrossRef]
  35. Shi, J.; Chen, D.; Wang, M. Pre-Impact Fall Detection with CNN-Based Class Activation Mapping Method. Sensors 2020, 20, 4750. [Google Scholar] [CrossRef] [PubMed]
  36. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
  37. Vincent, G.E.; Gupta, C.C.; Sprajcer, M.; Vandelanotte, C.; Duncan, M.J.; Tucker, P.; Lastella, M.; Tuckwell, G.A.; Ferguson, S.A. Are prolonged sitting and sleep restriction a dual curse for the modern workforce? a randomised controlled trial protocol. BMJ Open 2020, 10, e040613. [Google Scholar] [CrossRef] [PubMed]
  38. Chau, J.Y.; Daley, M.; Dunn, S.; Srinivasan, A.; Do, A.; Bauman, A.E.; van der Ploeg, H.P. The effectiveness of sit-stand workstations for changing office workers’ sitting time: Results from the Stand@Work randomized controlled trial pilot. Int. J. Behav. Nutr. Phys. Act. 2014, 11, 127. [Google Scholar] [CrossRef]
  39. Katzmarzyk, T.P. Standing and Mortality in a Prospective Cohort of Canadian Adults. Med. Sci. Sports Exerc. 2014, 46, 940–946. [Google Scholar] [CrossRef]
  40. Stamatakis, E.; Gale, J.; Bauman, A.; Ekelund, U.; Hamer, M.; Ding, D. Sitting Time, Physical Activity, and Risk of Mortality in Adults. J. Am. Coll. Cardiol. 2019, 73, 2062–2072. [Google Scholar] [CrossRef]
  41. Vincent, G.E.; Jay, S.M.; Sargent, C.; Kovac, K.; Vandelanotte, C.; Ridgers, N.D.; Ferguson, S.A. The impact of breaking up prolonged sitting on glucose metabolism and cognitive function when sleep is restricted. Neurobiol. Sleep Circadian Rhythm. 2018, 4, 17–23. [Google Scholar] [CrossRef]
  42. Gupta, C.C.; Centofanti, S.; Dorrian, J.; Coates, A.; Stepien, J.M.; Kennaway, D.; Wittert, G.; Heilbronn, L.; Catcheside, P.; Noakes, M.; et al. Altering meal timing to improve cognitive performance during simulated nightshifts. Chronobiol. Int. 2019, 36, 1691–1713. [Google Scholar] [CrossRef]
  43. Marottoli, R.; Allore, H.; Araujo, K.; Iannone, L.; Acampora, D.; Gottschalk, M.; Charpentier, P.; Kasl, S.; Peduzzi, P. A Randomized Trial of a Physical Conditioning Program to Enhance the Driving Performance of Older Persons. J. Gen. Intern. Med. 2007, 22, 590–597. [Google Scholar] [CrossRef]
  44. Marino, M.; Li, Y.; Rueschman, M.N.; Winkelman, J.W.; Ellenbogen, J.M.; Solet, J.M.; Dulin, H.; Berkman, L.F.; Buxton, O.M. Measuring Sleep: Accuracy, Sensitivity, and Specificity of Wrist Actigraphy Compared to Polysomnography. Sleep 2013, 36, 1747–1755. [Google Scholar] [CrossRef]
  45. Ridgers, N.D.; Fairclough, S. Assessing free-living physical activity using accelerometry: Practical issues for researchers and practitioners. Eur. J. Sport Sci. 2011, 11, 205–213. [Google Scholar] [CrossRef]
  46. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  47. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic differentiation in pytorch. 2017. Available online: https://openreview.net/forum?id=BJJsrmfCZ (accessed on 18 July 2022).
  48. Kraft, D.; Srinivasan, K.; Bieber, G. Deep Learning Based Fall Detection Algorithms for Embedded Systems, Smartwatches, and IoT Devices Using Accelerometers. Technologies 2020, 8, 72. [Google Scholar] [CrossRef]
  49. Cuadrado, J.; Michaud, F.; Lugrís, U.; Pérez Soto, M. Using Accelerometer Data to Tune the Parameters of an Extended Kalman Filter for Optical Motion Capture: Preliminary Application to Gait Analysis. Sensors 2021, 21, 427. [Google Scholar] [CrossRef]
  50. Ignatov, A. Real-time human activity recognition from accelerometer data using Convolutional Neural Networks. Appl. Soft Comput. 2018, 62, 915–922. [Google Scholar] [CrossRef]
  51. Li, H.; Li, J.; Guan, X.; Liang, B.; Lai, Y.; Luo, X. Research on Overfitting of Deep Learning. In Proceedings of the 2019 15th International Conference on Computational Intelligence and Security (CIS), Macau, China, 13–16 December 2019; pp. 78–81. [Google Scholar]
  52. Filtness, A.J.; Beanland, V.; Miller, K.A.; Larue, G.S.; Hawkins, A. Sleep loss and change detection in simulated driving. Chronobiol. Int. 2020, 37, 1430–1440. [Google Scholar] [CrossRef] [PubMed]
  53. Philip, P.; Sagaspe, P.; Taillard, J.; Valtat, C.; Moore, N.; Åkerstedt, T.; Charles, A.; Bioulac, B. Fatigue, Sleepiness, and Performance in Simulated Versus Real Driving Conditions. Sleep 2005, 28, 1511–1516. [Google Scholar] [CrossRef]
  54. Otmani, S.; Pebayle, T.; Roge, J.; Muzet, A. Effect of driving duration and partial sleep deprivation on subsequent alertness and performance of car drivers. Physiol. Behav. 2005, 84, 715–724. [Google Scholar] [CrossRef]
  55. Van Dongen, H.P.; Maislin, G.; Mullington, J.M.; Dinges, D.F. The cumulative cost of additional wakefulness: Dose-response effects on neurobehavioral functions and sleep physiology from chronic sleep restriction and total sleep deprivation. Sleep 2003, 26, 117–126. [Google Scholar] [CrossRef]
  56. Wennberg, P.; Boraxbekk, C.-J.; Wheeler, M.; Howard, B.; Dempsey, P.C.; Lambert, G.; Eikelis, N.; Larsen, R.; Sethi, P.; Occleston, J.; et al. Acute effects of breaking up prolonged sitting on fatigue and cognition: A pilot study. BMJ Open 2016, 6, 9. [Google Scholar] [CrossRef]
  57. Chueh, T.Y.; Chen, Y.C.; Hung, T.M. Acute effect of breaking up prolonged sitting on cognition: A systematic review. BMJ Open 2022, 12, e050458. [Google Scholar] [CrossRef]
  58. Tuckwell, G.A.; Vincent, G.E.; Gupta, C.C.; Ferguson, S.A. Does breaking up sitting in office-based settings result in cognitive performance improvements which last throughout the day? A review of the evidence. Ind. Health 2022. [Google Scholar] [CrossRef]
  59. Mullane, S.L.; Buman, M.P.; Zeigler, Z.S.; Crespo, N.C.; Gaesser, G.A. Acute effects on cognitive performance following bouts of standing and light-intensity physical activity in a simulated workplace environment. J. Sci. Med. Sport 2017, 20, 489–493. [Google Scholar] [CrossRef] [PubMed]
  60. Chaabene, S.; Bouaziz, B.; Boudaya, A.; Hökelmann, A.; Ammar, A.; Chaari, L. Convolutional Neural Network for Drowsiness Detection Using EEG Signals. Sensors 2021, 21, 1734. [Google Scholar] [CrossRef] [PubMed]
  61. Lee, B.-G.; Park, J.-H.; Pu, C.-C.; Chung, W.-Y. Smartwatch-Based Driver Vigilance Indicator With Kernel-Fuzzy-C-Means-Wavelet Method. IEEE Sens. J. 2016, 16, 242–253. [Google Scholar] [CrossRef]
  62. Lee, B.G.; Lee, B.L.; Chung, W.Y. Smartwatch-based driver alertness monitoring with wearable motion and physiological sensor. Annu Int Conf IEEE Eng Med Biol Soc 2015, 2015, 6126–6129. [Google Scholar] [CrossRef] [PubMed]
  63. Arvidsson, D.; Fridolfsson, J.; Börjesson, M. Measurement of physical activity in clinical practice using accelerometers. J. Intern. Med. 2019, 286, 137–153. [Google Scholar] [CrossRef]
  64. Steeves, J.A.; Bowles, H.R.; McClain, J.J.; Dodd, K.W.; Brychta, R.J.; Wang, J.; Chen, K.Y. Ability of Thigh-Worn ActiGraph and activPAL Monitors to Classify Posture and Motion. Med. Sci. Sports Exerc. 2015, 47, 952–959. [Google Scholar] [CrossRef]
  65. Edwardson, C.L.; Rowlands, A.V.; Bunnewell, S.; Sanders, J.; Esliger, D.W.; Gorely, T.; O’Connell, S.; Davies, M.J.; Khunti, K.; Yates, T. Accuracy of Posture Allocation Algorithms for Thigh- and Waist-Worn Accelerometers. Med. Sci. Sports Exerc. 2016, 48, 1085–1090. [Google Scholar] [CrossRef]
  66. He, B.; Bai, J.; Zipunnikov, V.V.; Koster, A.; Caserotti, P.; Lange-Maia, B.; Glynn, N.W.; Harris, T.B.; Crainiceanu, C.M. Predicting Human Movement with Multiple Accelerometers Using Movelets. Med. Sci. Sports Exerc. 2014, 46, 1859–1866. [Google Scholar] [CrossRef]
  67. Inoue, Y.; Komada, Y. Sleep loss, sleep disorders and driving accidents. Sleep Biol. Rhythm. 2014, 12, 96–105. [Google Scholar] [CrossRef]
  68. Liu, S.-Y.; Perez, M.A.; Lau, N. The impact of sleep disorders on driving safety—Findings from the Second Strategic Highway Research Program naturalistic driving study. Sleep 2018, 41, zsy023. [Google Scholar] [CrossRef] [PubMed]
  69. Cori, J.M.; Gordon, C.; Jackson, M.L.; Collins, A.; Philip, R.; Stevens, D.; Naqvi, A.; Hosking, R.; Anderson, C.; Barnes, M.; et al. The impact of aging on driving performance in patients with untreated obstructive sleep apnea. Sleep Health 2021, 7, 652–660. [Google Scholar] [CrossRef]
  70. Akerstedt, T.; Ingre, M.; Kecklund, G.; Anund, A.; Sandberg, D.; Wahde, M.; Philip, P.; Kronberg, P. Reaction of sleepiness indicators to partial sleep deprivation, time of day and time on task in a driving simulator--the DROWSI project. J. Sleep Res. 2010, 19, 298–309. [Google Scholar] [CrossRef] [PubMed]
  71. Caponecchia, C.; Williamson, A. Drowsiness and driving performance on commuter trips. J. Saf. Res. 2018, 66, 179–186. [Google Scholar] [CrossRef] [PubMed]
  72. Ma, L.; Ye, R. Does daily commuting behavior matter to employee productivity? J. Transp. Geogr. 2019, 76, 130–141. [Google Scholar] [CrossRef]
  73. Sprajcer, M.; Crowther, M.E.; Vincent, G.E.; Thomas, M.J.W.; Gupta, C.C.; Kahn, M.; Ferguson, S.A. New parents and driver safety: What’s sleep got to do with it? A systematic review. Transp. Res. Part F: Traffic Psychol. Behav. 2022, 89, 183–199. [Google Scholar] [CrossRef]
  74. Johnson, D.A.; Trivedi, M.M. Driving style recognition using a smartphone as a sensor platform. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1609–1615. [Google Scholar]
  75. Cai, Q.; Abdel-Aty, M.; Mahmoud, N.; Ugan, J.; Al-Omari, M.M.A. Developing a grouped random parameter beta model to analyze drivers’ speeding behavior on urban and suburban arterials with probe speed data. Accid. Anal. Prev. 2021, 161, 106386. [Google Scholar] [CrossRef]
  76. Allouch, A.; Koubaa, A.; Abbes, T.; Ammar, A. RoadSense: Smartphone Application to Estimate Road Conditions Using Accelerometer and Gyroscope. IEEE Sens. J. 2017, 17, 4231–4238. [Google Scholar] [CrossRef]
  77. Basavaraju, A.; Du, J.; Zhou, F.; Ji, J. A Machine Learning Approach to Road Surface Anomaly Assessment Using Smartphone Sensors. IEEE Sens. J. 2020, 20, 2635–2647. [Google Scholar] [CrossRef]
Figure 1. Experimental design for the 7-night protocol.
Figure 2. DixonNet convolutional neural network architecture. The final fully connected layer acts as the output layer for binary classification (i.e., sitting or breaking up sitting or 9-h or 5-h sleep history).
Figure 3. ResNet-18 convolutional neural network architecture. The dotted blue lines represent the residual connections (or skip connections) utilised in this network. Conv = convolutional layers. The final fully connected layer acts as the output layer for binary classification (i.e., sitting or breaking up sitting or 9-h or 5-h sleep history).
Figure 4. The average model performance across each fold for sitting and sleep history classification. Panel (A) evaluation accuracy (F-score) for sitting history; Panel (B) evaluation accuracy (F-score) for sleep history; Panel (C) natural log of the training loss for sitting history; Panel (D) natural log of the training loss for sleep history.
Figure 5. Confusion matrices for each model for sitting and sleep history classification. (A) DixonNet sitting history; (B) DixonNet sleep history; (C) ResNet-18 sitting history; (D) ResNet-18 sleep history.
Figure 6. Class activation maps of classified acceleration signals produced from CNN outputs of DixonNet and ResNet-18 for sitting and sleep history. (A) DixonNet sitting history; (B) ResNet-18 sitting history; (C) DixonNet sleep history; (D) ResNet-18 sleep history. Each individual plotted signal represents the x, y and z axis of the accelerometer, respectively (from top to bottom). The colour of the line is representative of the strength in confidence of the class prediction (sitting or sleep history). The Y axis is centred on the mean of each signal ±0.5 g, meaning there is a constant 1-g scale for each plot. The heat map bar represents the strength of prediction of each class.
Table 1. Number of participants in each experimental condition (class).
Sitting history          9-h sleep    5-h sleep
Sitting                  22           22
Breaking up sitting      20           20
Table 2. Summary performance of each model (across the 5-folds) for each class. Results are presented as the Mean ± SD.
Model        Class              Accuracy (%)      F-Score
DixonNet     Sitting history    77.24 (±2.61)     0.76 (±0.03)
DixonNet     Sleep history      75.71 (±2.69)     0.76 (±0.02)
ResNet-18    Sitting history    88.63 (±1.36)     0.88 (±0.01)
ResNet-18    Sleep history      88.63 (±1.15)     0.88 (±0.01)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
