Article

Classification of Roads and Types of Public Roads Using EOG Smart Glasses and an Algorithm Based on Machine Learning While Driving a Car

1 Department of Biosensors and Processing of Biomedical Signals, Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
2 Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
3 Operating Systems and Distributed Systems, University of Siegen, Hoelderlinstr. 3, 57068 Siegen, Germany
4 Department of Knowledge Engineering, University of Economics in Katowice, Bogucicka 3, 40-287 Katowice, Poland
5 Department of Conservative Dentistry with Endodontics, Faculty of Medical Science, Medical University of Silesia, Pl. Akademicki 17, 41-902 Bytom, Poland
* Author to whom correspondence should be addressed.
Electronics 2022, 11(18), 2960; https://doi.org/10.3390/electronics11182960
Submission received: 31 July 2022 / Revised: 11 September 2022 / Accepted: 13 September 2022 / Published: 18 September 2022

Abstract

Driving a car has become a necessary part of everyday life in the modern world. Research exploring the topic of road safety has therefore become increasingly relevant. In this paper, we propose a recognition algorithm based on physiological signals acquired from JINS MEME ES_R smart glasses (electrooculography, acceleration and angular velocity) to classify four commonly encountered road types: city road, highway, housing estate and undeveloped area. Data from 30 drivers were acquired in real driving conditions. Hand-crafted statistical features were extracted from the physiological signals to train and evaluate a boosted tree ensemble classifier. We achieved an overall accuracy, precision, recall and F1 score of 87.64%, 86.30%, 88.12% and 87.08%, respectively, on the test dataset.

1. Introduction

Mobility is essential in our daily lives. The car is one of the most widely used means of transportation due to the convenience it provides compared with other means, such as public transport.
Statistics show that the estimated number of worldwide car sales followed a steadily increasing trend over the past 10 years, with the exception of the 2020–2021 period corresponding to the COVID-19 pandemic lockdowns [1]. Despite this, car sales started to rise again in 2021 after the end of most lockdown-related restrictions.
This increased usage of cars has led to a need for further exploration of car-related research areas, including driver behavior analysis [2] in relation to traffic, safety [3] and environmental issues [4,5,6].
Driving a car is an activity that involves many neural circuits in the brain related to visual-motor coordination, episodic and procedural memory, visual search and executive functions such as the ability to plan, change the strategy of conduct or initiate and inhibit reactions [7]. Most of the information that reaches the driver while driving arrives through the visual system, due to the need to perceive signs, objects, people and events [8]. Proper visual and motor coordination of the driver is therefore necessary [9].
There are undeniable differences in road behavior between a driver who is just learning to drive and one who is already experienced [10]. An experienced driver performs the basic driving activities (e.g., turning the steering wheel, shifting gears or pressing the pedals) automatically, without paying special attention to them. Analyzing the driving styles and behaviors of both new and experienced drivers could yield insights for the future development of the automotive industry, making driving both more economical and safer [11,12]. Research on driving style analysis in particular has possible applications in logistics, the transport industry, car insurance companies, government regulatory organizations, the controlled development of infrastructure and public transport [13].
One specific field of interest within driving style analysis is accident detection. According to a WHO report published on 9 February 2004, road accidents are a major but neglected public health challenge that requires a concerted effort toward effective and sustainable prevention. Of all the systems people deal with on a daily basis, road traffic systems are the most complex and the most dangerous. It is estimated that 1.2 million people worldwide are killed in road accidents and as many as 50 million are injured each year. These numbers are projected to increase by around 65% over the next 20 years unless a new prevention commitment is made [14].
Studies on the topic of accident detection can be divided into two main categories. The first category focuses on accident prevention based either on the recognition of cognitive activities using wearable sensors during driving [15], detection of the road type using either vision-based or car internal sensors [16] or the monitoring of specific dangerous events during driving (e.g., wrong-way driving or a drop in vigilance) using either wearable or vision-based sensors [17,18,19]. The second category investigates the topic of post-crash detection for either the dispatching of emergency services [20,21] or as an inspection tool for car-renting companies [22,23,24].
In this paper, we investigate the first category of work through the topic of road type detection. To the best of our knowledge, the number of past studies regarding this topic is very limited, and they exclusively use either vision-based or car internal sensors. For instance, Jo et al. [25] proposed a vehicle-tracking and behavior-reasoning algorithm to provide advanced driver assistance using LIDAR, radar and RGB camera data to obtain insight into the surroundings of the vehicle. Ramanishka et al. [26] introduced the Honda Research Institute Driving Dataset, which combines RGB and car modalities (GPS, LIDAR, CAN-Bus and IMU sensors) in suburban, urban and highway environments to help researchers investigate automated driving scene analysis. Finally, many other studies have aimed to automatically analyze the surroundings of a vehicle without directly classifying road types. Examples include the monitoring of cars in neighboring lanes in a highway environment [27] or of weather and traffic conditions in highway and urban environments [28]. It can be noted from the aforementioned work that the topic of road type detection using wearable devices is still unexplored.
We therefore propose a system for the automatic detection of road types that applies state-of-the-art machine learning techniques to physiological data acquired from a pair of smart glasses. We use the JINS Meme glasses (Jins Inc., Tokyo, Japan), which record electrooculography (EOG), characterizing eye movements, and linear acceleration and angular velocity, characterizing head movements. The data acquired by such a device have been shown to be useful for recognizing cognitive activities in a driving context [15]. In particular, we verify whether road type detection is possible using wearable modalities alone, without any assistance from external sensors such as car internal sensors. To the best of our knowledge, this study is the first to tackle this specific classification problem exclusively using physiological sensor modalities.
To abstract the problem of road type detection from a machine learning point of view, we translate it into a classification problem and solve it using standard supervised learning techniques, following the standard pattern recognition chain that comprises the following steps [29]:
  • Data acquisition: the choice and set-up of sensors and design of an experimental set-up;
  • Data processing: operations to clean the data such as noise removal, filtering, synchronization and segmentation;
  • Feature extraction: computations of specific values in the data that carry a specific relevance for the classification problem to be solved;
  • Classification: training and evaluation of a classifier operating on vectors of the features previously extracted.
In particular, we focus on four types of areas to recognize: a city road, a highway, a housing estate and an undeveloped area [30]. We manually extracted statistical features from the data, following our past observation that more complex feature-learning methods using deep neural networks do not necessarily produce better results for problems where physiological time series are involved and data are relatively scarce [22,31]. Using these features, we perform a comparative study of the most widely used state-of-the-art classifiers and a feature analysis based on the computation of feature importance scores using ANOVA.
To summarize, the main contributions of our paper are as follows:
  • We perform a machine learning study for the classification of four different road types (city road, highway, housing estate and undeveloped area) using manually crafted features extracted from physiological signals (EOG, acceleration and angular velocity) acquired from smart glasses worn by the driver;
  • We perform a comparative experiment involving several state-of-the-art classifiers to find the best configuration for the classification problem to be solved;
  • We perform feature selection with ANOVA to determine what the top-performing features are for the classification problem at stake.
The rest of our paper is structured as follows. The materials and methods used in our study are first described in Section 2. The results of our experiments are presented in Section 3 and discussed in Section 4. Finally, a conclusion and comment about future outlooks are provided in Section 5.

2. Materials and Methods

2.1. Technology Used

For the data acquisition, we used JINS MEME smart glasses, a device equipped with three-point EOG electrodes and a six-axis inertial measurement unit (IMU) comprising an accelerometer and a gyroscope [15,32]. The sampling frequency of the acquired signals was 100 Hz. The data were transmitted to a computer via Bluetooth or USB and could be exported to a CSV file.
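As an illustration, such an exported recording can be loaded into MATLAB with readtable; the file name and column names below are hypothetical assumptions, as the exact CSV layout depends on the export settings of the JINS software.

```matlab
% Load an exported recording (file and column names are illustrative assumptions).
T = readtable('driver01_session.csv');

fs   = 100;                                      % sampling frequency [Hz]
acc  = [T.ACC_X, T.ACC_Y, T.ACC_Z];              % accelerometer axes
gyro = [T.GYRO_X, T.GYRO_Y, T.GYRO_Z];           % gyroscope axes
eog  = [T.EOG_L, T.EOG_R, T.EOG_H, T.EOG_V];     % four EOG channels
```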
For the data preparation and classification, we used MATLAB R2022a. The models were trained using a PC with an Intel(R) Core(TM) i5-9300H CPU processor and 16 GB of RAM.

2.2. Data Acquisition

A dataset of physiological data obtained with the JINS Meme smart glasses (i.e., EOG, IMU linear acceleration and angular velocity) was acquired for this and previous related studies [8,33]. The study was conducted in accordance with Chapter 4 of the Act on Vehicle Drivers of the Republic of Poland and with the permit issued by the Provincial Police Department in Katowice. Before the study, the volunteers presented all the necessary documents confirming that they were allowed to participate in road traffic. The participants voluntarily gave their informed consent to participate in the study.
Data were acquired under real road conditions from 30 healthy subjects, including 20 experienced drivers and 10 students attending a driving school [34,35,36]. The participants were 16 males and 14 females with an average age of 38 ± 17 years. The complete dataset is available at the IEEE DataPort [37].
Each participant performed a driving test on a route of 28.7 km, which took approximately 75 min. The route was localized in the Silesian Voivodeship (southern Poland) in the cities of Tarnowskie Góry, Radzionków, Bytom, and Piekary Śląskie. The course of the route was determined in consultation with the driving instructor and on the basis of the rules of practical driving tests in Poland. While performing a test, the driver was wearing the smart glasses. The set-up of the experiment is presented in Figure 1.
All data were labeled during the drive and then divided into four groups according to the type of road (1: highway; 2: city road; 3: undeveloped area; 4: housing estate). The labeling was performed manually by placing a marker where each type of road started and ended.

2.3. Data Preparation

The data acquired from the smart glasses include signals from the three axes of the accelerometer ($ACC_X$, $ACC_Y$ and $ACC_Z$), signals from the three axes of the gyroscope ($GYRO_X$, $GYRO_Y$ and $GYRO_Z$) and the four channels of the EOG signal ($EOG_L$, $EOG_R$, $EOG_H$ and $EOG_V$). Table 1 contains general statistics for all the acquired signals.
The magnitude of the acceleration was computed using the Euclidean norm of the 3D acceleration, as shown in Equation (1) [38]:
$$ACC = \sqrt{ACC_X^2 + ACC_Y^2 + ACC_Z^2}$$
All signals were then filtered using a third-order median filter and rescaled to the range $[0, 1]$ using min-max normalization, as specified in Equation (2):
$$X_{rescaled} = \frac{X - \min X}{\max X - \min X},$$
where the min and max values were determined for each signal. All normalized sensor signals were then segmented using a sliding time window with a length of 1 s (100 samples) and a 50% stride (50 samples). Each window received the label of the original signal. If the signal length was not divisible by 50, the last samples were discarded.
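The following is a minimal MATLAB sketch of this preprocessing, assuming acc (N×3), gyro (N×3) and eog (N×4) matrices sampled at 100 Hz; the per-signal normalization and the 8-channel layout match the feature extraction described in Section 2.4.

```matlab
% Preprocessing sketch: magnitude (Equation (1)), median filtering,
% min-max normalization (Equation (2)) and sliding-window segmentation.
accMag  = vecnorm(acc, 2, 2);              % Euclidean norm of each 3D sample
signals = [accMag, gyro, eog];             % 8 channels used downstream

for c = 1:size(signals, 2)
    signals(:, c) = medfilt1(signals(:, c), 3);   % third-order median filter
    signals(:, c) = rescale(signals(:, c));       % min-max scaling to [0, 1]
end

winLen = 100;                              % 1 s windows at 100 Hz
stride = 50;                               % 50% overlap
nWin = floor((size(signals, 1) - winLen) / stride) + 1;
windows = cell(nWin, 1);
for w = 1:nWin
    idx = (w - 1) * stride + (1:winLen);
    windows{w} = signals(idx, :);          % trailing samples that do not fill a window are dropped
end
```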

2.4. Feature Extraction

For this problem, we decided to apply traditional feature engineering. For each window, the following features were calculated:
  • μ and σ of a normal distribution fitted to the data [39];
  • Skewness and kurtosis;
  • Minimum and maximum value.
The feature extraction process resulted in 36,669 feature vectors of 48 dimensions (6 features for each of the 8 sensor channels) that were used to train, validate and test the classifiers.
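A sketch of the per-window feature computation is given below, assuming windows is the cell array of 100×8 segments produced by the preprocessing sketch above.

```matlab
% Compute 6 statistical features for each of the 8 channels (48 per window).
nWin = numel(windows);
features = zeros(nWin, 48);
for w = 1:nWin
    seg = windows{w};
    f = [];
    for c = 1:size(seg, 2)
        pd = fitdist(seg(:, c), 'Normal');     % fitted mu and sigma [39]
        f = [f, pd.mu, pd.sigma, ...
                skewness(seg(:, c)), kurtosis(seg(:, c)), ...
                min(seg(:, c)), max(seg(:, c))];
    end
    features(w, :) = f;
end
```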
Finally, feature selection was also performed using ANOVA on each feature separately to determine which ones maximized the distance between the four classes we used in our problem. Figure 2 and Figure 3 present the F scores and associated p-values, respectively, for each feature on which ANOVA was applied.
Through trial and error, we selected the 30 features that maximized the ANOVA F score, as this configuration produced the best results.
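This ranking can be reproduced with MATLAB's anova1 applied to each feature column separately; the sketch below assumes features is the matrix built above and labels is the corresponding vector of road type labels.

```matlab
% One-way ANOVA per feature: the F score measures between-class separability.
nFeat = size(features, 2);
F = zeros(nFeat, 1);
p = zeros(nFeat, 1);
for k = 1:nFeat
    [p(k), tbl] = anova1(features(:, k), labels, 'off');  % 'off' suppresses plots
    F(k) = tbl{2, 5};                                     % F statistic from the ANOVA table
end

[~, order] = sort(F, 'descend');
selected = order(1:30);                    % keep the 30 highest-ranked features
X = features(:, selected);
```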

2.5. Classification

For this study, we used the Classification Learner tool available in MATLAB [40]. It enables the quick training, validation and testing of many classifiers, the tuning of their parameters and the comparison of their results. The models available in this tool are different types of widely used supervised machine learning classifiers, listed below:
  • Classification trees;
  • Naive Bayes models with Gaussian, multinomial or kernel predictors;
  • K-nearest neighbors;
  • Support vector machines;
  • Generalized additive models;
  • Ensembles for multiclass learning (boosting, random forest, bagging, random subspace and ECOC).
Data were randomly separated into training, validation and test datasets in proportions of 70%, 20% and 10%, respectively, and the validation scheme was holdout validation. We performed a series of tests on different classifiers and hyperparameters, compared them and chose the one that best handled the problem.
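As an illustration, the 70/20/10 holdout split can be reproduced with cvpartition (a sketch; the exact randomization used by Classification Learner may differ):

```matlab
% Stratified 70/20/10 split: first hold out 30%, then split it into
% validation (20% of the total) and test (10% of the total).
rng(1);                                              % fix the seed for reproducibility
part1 = cvpartition(labels, 'Holdout', 0.30);
XTrain = X(training(part1), :);   yTrain = labels(training(part1));
XHeld  = X(test(part1), :);       yHeld  = labels(test(part1));

part2 = cvpartition(yHeld, 'Holdout', 1/3);          % 1/3 of the held-out 30% = 10% overall
XVal   = XHeld(training(part2), :);   yVal  = yHeld(training(part2));
XTest  = XHeld(test(part2), :);       yTest = yHeld(test(part2));
```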
The preliminary experiments showed that most misclassifications concerned examples from classes 1 and 2. To reduce their number and improve the overall performance of our models, we introduced misclassification penalty costs so that examples belonging to specific classes weigh more in the loss computed during training. The matrix of these costs is provided in Table 2.
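The cost matrix of Table 2 can be passed to MATLAB classifiers through the Cost name-value argument; a sketch of its definition:

```matlab
% Misclassification costs (rows: true class, columns: predicted class).
% Confusions between classes 1 and 2 (highway and city road) cost twice as much.
C = [0 2 1 1;
     2 0 1 1;
     1 1 0 1;
     1 1 1 0];
```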
Table 3 presents the comparison of the overall performance parameters of the four best classifiers from the initial tests. A detailed comparison of results obtained with these classifiers can be found in Appendix A.
The initial tests showed that the problem was solved significantly better by an ensemble classifier. For this reason, we decided to continue improving only this model.
In order to obtain the best possible classification accuracy, we conducted a hyperparameter optimization process. The method was Bayesian optimization, with expected improvement per second as the acquisition function. Figure 4 shows the hyperparameter tuning process.
In this process, different sets of hyperparameters were tested, and the set with the smallest classification error was selected. The hyperparameters tested include the following:
  • Preset: specifies the type of classifier to be used. The available types were boosted trees, bagged trees, subspace discriminant, subspace KNNs and RUSBoost trees;
  • Ensemble method: the method used to combine the weak learners into a high-quality model. The available ensemble methods differed for each preset;
  • Number of learners: the number of weak learners to use in the ensemble;
  • Learning rate: regulates the speed of the learning process. A smaller learning rate helped to ensure that the model did not overfit;
  • Maximum number of splits: controls the depth of the tree learners (i.e., the number of “branch points”).
The best classification model that we found was an ensemble classifier with random undersampling boosted trees (RUSBoosted Trees) [41], a boosted ensemble of decision trees in which each weak learner is trained on a random subset of the training set in which the dominant class is undersampled. More specifically, RUSBoosted Trees iteratively trains a chosen number of weak learners, each on a subset of the training set that underwent two modifications: a random undersampling of the dominant class and a normalized weighting of the examples, which is taken into account when computing the loss of the learner. The weights are initialized to a uniform distribution for the first iteration and then iteratively updated using an update parameter computed from the loss at the previous iteration. The model hyperparameters that were determined to be optimal were as follows (a training sketch is given after the list):
  • Preset: RUSBoosted Trees;
  • Ensemble method: AdaBoost;
  • Learner type: decision tree;
  • Maximum number of splits: 2078;
  • Number of learners: 451;
  • Learning rate: 0.81048.
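Assuming that Classification Learner relies on fitcensemble internally, the final model can be reproduced approximately as follows:

```matlab
% Train the final RUSBoosted Trees ensemble with the optimized hyperparameters.
tree = templateTree('MaxNumSplits', 2078);
model = fitcensemble(XTrain, yTrain, ...
    'Method',            'RUSBoost', ...   % random undersampling boosting [41]
    'Learners',          tree, ...
    'NumLearningCycles', 451, ...          % number of weak learners
    'LearnRate',         0.81048, ...
    'Cost',              C);               % misclassification costs from Table 2

yPred = predict(model, XTest);             % predictions on the held-out test set
```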

3. Results

To evaluate the relevance of our trained classifier, we calculated standard evaluation metrics computed from its confusion matrix (shown in Figure 5), which presents the number of examples from the test dataset classified into a specific group (predicted label) compared with their real class label (true label).
We also computed the accuracy, precision, recall and average F1 score, whose expressions are provided in Equations (3)–(6):
$$Accuracy = \frac{TruePositive + TrueNegative}{TruePositive + FalsePositive + TrueNegative + FalseNegative}$$
$$Precision = \frac{TruePositive}{TruePositive + FalsePositive}$$
$$Recall = \frac{TruePositive}{TruePositive + FalseNegative}$$
$$F1\text{-}Score = \frac{2 \times Precision \times Recall}{Precision + Recall}$$
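These metrics can be computed per class from the confusion matrix in a one-vs-all fashion; a MATLAB sketch, continuing from the training example above:

```matlab
% Per-class metrics (Equations (3)-(6)) derived from the confusion matrix.
cm = confusionmat(yTest, yPred);           % rows: true class, columns: predicted
nClass = size(cm, 1);
[accuracy, precision, recall, f1] = deal(zeros(1, nClass));
for k = 1:nClass
    TP = cm(k, k);
    FP = sum(cm(:, k)) - TP;
    FN = sum(cm(k, :)) - TP;
    TN = sum(cm(:)) - TP - FP - FN;
    accuracy(k)  = (TP + TN) / (TP + FP + TN + FN);
    precision(k) = TP / (TP + FP);
    recall(k)    = TP / (TP + FN);
    f1(k)        = 2 * precision(k) * recall(k) / (precision(k) + recall(k));
end
```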
Finally, we used the receiver operating characteristic (ROC) curve as a tool to assess the correctness of the classifier. It provides a joint description of the sensitivity and specificity; in other words, it can be described as a graph of the true positive rate against the false positive rate [42,43]. In a multi-class setting, N ROC curves and their areas under the curve (AUC) can be plotted for the N classes using the one-vs-all methodology. In our case, the AUC values were around 0.9, which means that there was roughly a 90% chance that the model would distinguish a positive class from a negative class [44]. For comparison, related work on driver state evaluation with the ROC-AUC reported AUCs of 0.904, 0.863 and 0.805 for drowsiness [45] and 0.7914–0.8635 for driver anger evaluation [46]. Figure 6, Figure 7, Figure 8 and Figure 9 present the ROC curves for all four classes.
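The one-vs-all ROC curves and AUC values can be obtained in MATLAB with perfcurve, using the classifier's per-class scores (a sketch continuing from the training example above):

```matlab
% One-vs-all ROC/AUC per class using the ensemble's prediction scores.
[~, scores] = predict(model, XTest);       % one score column per class (ordered as model.ClassNames)
classes = unique(yTest);
figure; hold on;
for k = 1:numel(classes)
    [fpr, tpr, ~, auc] = perfcurve(yTest, scores(:, k), classes(k));
    plot(fpr, tpr);
    fprintf('Class %d: AUC = %.2f\n', classes(k), auc);
end
xlabel('False positive rate'); ylabel('True positive rate');
```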
The values obtained for all the aforementioned evaluation metrics (accuracy, precision, recall, average F1 score and AUC) are provided in Table 4.
The proposed approach yielded an overall accuracy of 87.64%. Among the four specific classes, housing estates were the best recognized. The city road was the least well-recognized class, although its recognition accuracy remained relatively high for a four-class classification problem at 83.77%.

4. Discussion

The topic of this paper was the development of a machine learning algorithm for road type classification. The problem addressed by this work was to determine whether physiological data acquired with smart glasses (EOG, ACC and GYRO signals) are suitable for classifying the type of road traveled by the person driving the car.
The best classifier that was found (RUSBoosted Trees) yielded a promising accuracy of 87.64% for a four-class classification problem. This relatively high accuracy indicates that physiological data acquired from JINS MEME smart glasses (EOG, acceleration and angular velocity) are sufficient to determine the type of road being traveled. It should be noted that the ANOVA analysis we performed on our features showed that the most relevant information for our classification problem resided in the IMU data, as shown in Figure 2, and more specifically in the angular velocity measured by the gyroscope. This might indicate that head movements (rather than eye movements) are one of the main factors that could lead to the distinction of a road type.
A possible axis of development is the investigation of additional physiological modalities that could be set up in an unobtrusive way to provide insights into the environment the driver is in. For instance, Leicht et al. [47] investigated the monitoring of the heart rate (HR) and respiration rate (RR) of drivers in both urban and rural scenarios. More specifically, the efficiency of unobtrusive sensors for estimating both the HR and RR was evaluated in real and simulated conditions by comparing their estimations to the readings of reference HR and RR sensors. Under laboratory conditions, magnetic induction and photoplethysmography, both integrated into the seat belt, and hybrid imaging, combining visual and thermal imaging, were evaluated for RR sensing, and reliable RR detection was possible with all three sensor technologies. In real driving, RR sensing by hybrid imaging and HR sensing by a seat-integrated capacitive electrocardiograph (ECG) were evaluated; reliable HR and RR detection was possible only in the rural scenario. In the urban scenario, only RR detection was feasible, as the capacitive ECG was disturbed by motion artifacts and the HR detection was impaired. The evaluated unobtrusive measurement systems can thus monitor physiological parameters during, for example, long highway drives, but not during agile inner-city driving, due to motion artifacts.
The literature on transport infrastructure research cannot be separated from the aspects of artificial intelligence that support the management and organization of modern transport and logistics. The field of computer science that deals with the practical application of data analysis algorithms, based mainly on machine learning, aims to create automatic systems that, based on accumulated experience and knowledge, can detect driving style patterns in the processed data, predict future events and react to them, for example during real travel [17]. Several practical applications of machine learning systems can be distinguished: research on recognizing elements in images, which may also inspire the analysis of single or multi-modal signals such as the EOG [48,49]; speech recognition using wearable devices worn around the head, with all the interference caused by such devices [50]; written text recognition; navigation in unknown areas; recommendation and guidance systems; and the forecasting of financial and economic trends. More pioneering research can open up further areas [51].
Finally, possible improvements regarding the machine learning aspects of the study could be tested. The investigation of several methods of signal analysis and their impact on the results might be advantageous [52,53]. More advanced feature extraction methods, such as feature learning with deep neural networks (e.g., convolutional neural networks) [29], could be investigated once the dataset size is increased to a point where enough data are available to properly train such models. Time-series transfer learning techniques to refine the performances of these models could also be investigated [54].

5. Conclusions

In this paper, a machine learning method using physiological data acquired from smart glasses for the detection of road types while driving a car was presented. A pair of JINS Meme smart glasses collecting the EOG, linear acceleration and angular velocity was worn by 30 subjects driving a car in real-life conditions. Statistical features were manually extracted from the data and used to train a classifier for the recognition of four different road types (city road, highway, housing estate and undeveloped area). A comparative study of various state-of-the-art classifiers was carried out and led to a best overall accuracy of 87.64% using boosted trees. Additionally, a feature importance calculation based on ANOVA showed that the most important features were derived from head movements.
Despite the very promising results obtained by our proposed approach, our study still has some major limitations. The most important one is that we used a single dataset for our analysis due to the lack of publicly available data that deal with a similar problem. Moreover, the dataset was limited in size because of sanitary restrictions caused by the COVID-19 pandemic, which might limit the generalization capacities of our method. Finally, it is currently not possible to compare our results to other studies due to the lack of past research working on a similar problem.
Future works will focus on increasing the amount of data used for such a study. This will first be accomplished by resuming and extending the data acquisition campaign in real conditions that was interrupted by the COVID-19 pandemic. Alternative approaches based on acquiring synthetic data using simulators will also be tested, as they provide a relatively cost-effective way to acquire additional data that could boost the generalization capacity of our trained classifiers [55,56,57]. Finally, additional physiological modalities monitored by unobtrusive sensors, such as the HR or RR [47], will be considered in future studies.

Author Contributions

Conceptualization, R.D., N.P. and F.L.; methodology, R.D. and N.P.; software, N.P.; validation, N.P., K.D. and F.L.; formal analysis, R.D. and N.P.; investigation, R.D. and F.L.; resources, R.D. and K.M.-P.; data curation, N.P.; writing—original draft preparation, R.D. and N.P.; writing—review and editing, R.D., N.P., K.D., H.H.P. and F.L.; visualization, N.P.; supervision, R.D., M.G. and E.T.; project administration, R.D.; funding acquisition, M.G. and E.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Bioethics Committee of the Medical University of Silesia on 16 October 2018 (KNW/0022/KB1/18).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available in IEEE DataPort at https://dx.doi.org/10.21227/4yte-5s06.

Acknowledgments

We would like to thank the volunteers who participated in the study and the driving school for providing the opportunity to acquire signals on learner drivers.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EOG    electrooculogram
RR     respiration rate
HR     heart rate
ECG    electrocardiogram
EMG    electromyogram
ROC    receiver operating characteristic
AUC    area under the ROC curve
SVM    support vector machine
KNN    k-nearest neighbors
NN     neural network

Appendix A. Comparison of Classifiers

To justify the choice of the final classifier, we present in this appendix a comparison of the four best solutions from the different classifier groups. For all the classifiers, we used the same validation scheme (holdout validation) and the same data partition scheme (70%, 20%, 10%).

Appendix A.1. Support Vector Machine

From the group of SVM classifiers, the best results were achieved with the following hyperparameters:
  • Preset: cubic SVM;
  • Kernel function: cubic;
  • Kernel scale: automatic;
  • Box constraint level: 1;
  • Multiclass method: one vs. one;
  • Standardize data: true.
Figure A1 and Figure A2 and Table A1 present the performance of the aforementioned model.
Figure A1. Confusion matrix of the SVM classifier.
Figure A2. ROC curves of the SVM classifier.
Table A1. Evaluation metrics of the SVM classifier on the test dataset.

Type of Road | Accuracy | Precision | Recall | F1 Score | AUC
City road | 0.68 | 0.61 | 0.68 | 0.65 | 0.89
Highway | 0.72 | 0.78 | 0.72 | 0.75 | 0.89
Housing estate | 0.69 | 0.63 | 0.69 | 0.66 | 0.90
Undeveloped area | 0.82 | 0.84 | 0.82 | 0.83 | 0.94
Overall | 0.74 | 0.72 | 0.73 | 0.72

Appendix A.2. k-Nearest Neighbors

From the group of KNN classifiers, the best results were achieved with the following hyperparameters:
  • Preset: Fine KNN;
  • Number of neighbors: 1;
  • Distance metric: Euclidean;
  • Distance weight: equal;
  • Standardize data: true.
Figure A3 and Figure A4 and Table A2 present the performance of the model.
Figure A3. Confusion matrix of kNN.
Figure A4. ROC curves of kNN.
Table A2. Evaluation metrics of the kNN classifier on the test dataset.

Type of Road | Accuracy | Precision | Recall | F1 Score | AUC
City road | 0.64 | 0.65 | 0.64 | 0.65 | 0.78
Highway | 0.73 | 0.73 | 0.73 | 0.73 | 0.80
Housing estate | 0.67 | 0.67 | 0.67 | 0.67 | 0.80
Undeveloped area | 0.81 | 0.81 | 0.81 | 0.81 | 0.87
Overall | 0.73 | 0.72 | 0.72 | 0.72

Appendix A.3. Neural Networks

From the group of NN classifiers, the best results were achieved with the following hyperparameters:
  • Preset: wide neural networks;
  • Number of fully connected layers: 2;
  • First layer size: 100;
  • Second layer size: 10;
  • Activation: ReLU;
  • Regularization strength (Lambda): 0;
  • Standardize data: yes.
Figure A5 and Figure A6 and Table A3 present the performance of the model.
Figure A5. Confusion matrix of neural networks.
Table A3. Evaluation metrics of the neural network classifier on the test dataset.

Type of Road | Accuracy | Precision | Recall | F1 Score | AUC
City road | 0.71 | 0.66 | 0.71 | 0.69 | 0.91
Highway | 0.78 | 0.83 | 0.78 | 0.80 | 0.93
Housing estate | 0.72 | 0.68 | 0.72 | 0.70 | 0.90
Undeveloped area | 0.84 | 0.86 | 0.84 | 0.85 | 0.95
Overall | 0.77 | 0.76 | 0.76 | 0.76
Figure A6. ROC curves of neural networks.

Appendix A.4. Ensemble Classifiers

From the group of ensemble classifiers, the best results were achieved with the following hyperparameters:
  • Preset: bagged trees;
  • Ensemble method: bag;
  • Learner type: decision tree;
  • Maximum number of splits: 33,002;
  • Number of learners: 30.
Figure A7 and Figure A8 and Table A4 present the performance of the model.
Figure A7. Confusion matrix of the ensemble classifier.
Figure A8. ROC curves of the ensemble classifier.
Table A4. Evaluation metrics of the ensemble classifier on the test dataset.

Type of Road | Accuracy | Precision | Recall | F1 Score | AUC
City road | 0.84 | 0.75 | 0.84 | 0.79 | 0.95
Highway | 0.82 | 0.87 | 0.82 | 0.84 | 0.96
Housing estate | 0.87 | 0.77 | 0.87 | 0.82 | 0.96
Undeveloped area | 0.86 | 0.91 | 0.86 | 0.88 | 0.98
Overall | 0.84 | 0.82 | 0.85 | 0.83

References

  1. Estimated Worldwide Motor Vehicle Production from 2000 to 2021. Available online: https://www.statista.com/statistics/262747/worldwide-automobile-production-since-2000/ (accessed on 1 September 2022).
  2. Zangi, N.; Srour-Zreik, R.; Ridel, D.; Chasidim, H.; Borowsky, A. Driver distraction and its effects on partially automated driving performance: A driving simulator study among young-experienced drivers. Accid. Anal. Prev. 2022, 166, 106565. [Google Scholar] [CrossRef] [PubMed]
  3. Henriksson, J.; Borg, M.; Englund, C. Automotive Safety and Machine Learning: Initial Results from a Study on How to Adapt the ISO 26262 Safety Standard. In Proceedings of the 2018 IEEE/ACM 1st International Workshop on Software Engineering for AI in Autonomous Systems (SEFAIAS), Gothenburg, Sweden, 28 May 2018; pp. 47–49. [Google Scholar]
  4. Schipor, O.A.; Vatavu, R.D.; Vanderdonckt, J. Euphoria: A Scalable, event-driven architecture for designing interactions across heterogeneous devices in smart environments. Inf. Softw. Technol. 2019, 109, 43–59. [Google Scholar] [CrossRef]
  5. Kumar, A.; Kumar, R.; Aggarwal, A. S2RC: A multi-objective route planning and charging slot reservation approach for electric vehicles considering state of traffic and charging station. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 2192–2206. [Google Scholar] [CrossRef]
  6. Wang, K.; Yang, J.; Li, Z.; Liu, Y.; Xue, J.; Liu, H. Naturalistic Driving Scenario Recognition with Multimodal Data. In Proceedings of the IEEE Computer Society, Paphos, Cyprus, 6–9 June 2022; pp. 476–481. [Google Scholar] [CrossRef]
  7. Przewłócka, A.; Sitek, E.J.; Tarnowski, A.; Sławek, J. Zdolność do prowadzenia pojazdów w chorobach neurozwyrodnieniowych przebiegających z otępieniem. Pol. Przegląd Neurol. 2015, 11, 117–127. [Google Scholar]
  8. Doniec, R.J.; Sieciński, S.; Duraj, K.M.; Piaseczna, N.J.; Mocny-Pachońska, K.; Tkacz, E.J. Recognition of drivers’ activity based on 1D convolutional neural network. Electronics 2020, 9, 2002. [Google Scholar] [CrossRef]
  9. Tian, R.; Ruan, K.; Li, L.; Le, J.; Greenberg, J.; Barbat, S. Standardized evaluation of camera-based driver state monitoring systems. IEEE/CAA J. Autom. Sin. 2019, 6, 716–732. [Google Scholar] [CrossRef]
  10. Li, X.; Vaezipour, A.; Rakotonirainy, A.; Demmel, S. Effects of an in-vehicle eco-safe driving system on drivers’ glance behaviour. Accid. Anal. Prev. 2019, 122, 143–152. [Google Scholar] [CrossRef]
  11. Tement, S.; Musil, B.; Plohl, N.; Horvat, M.; Stojmenova, K.; Sodnik, J. Assessment and Profiling of Driving Style and Skills. In User Experience Design in the Era of Automated Driving; Riener, A., Jeon, M., Alvarez, I., Eds.; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2022; pp. 151–176. [Google Scholar] [CrossRef]
  12. Kraft, A.K.; Maag, C.; Cruz, M.I.; Baumann, M.; Neukum, A. Effects of explaining system failures during maneuver coordination while driving manual or automated. Accid. Anal. Prev. 2020, 148, 105839. [Google Scholar] [CrossRef]
  13. Al-Mheiri, M.; Kais, O.; Bonny, T. Car Plate Recognition Using Machine Learning. In Proceedings of the 2022 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 21–24 February 2022; pp. 1–6. [Google Scholar] [CrossRef]
  14. Racioppi, F. Preventing Road Traffic Injury: A Public Health Perspective for Europe; World Health Organization Regional Office for Europe: Copenhagen, Denmark, 2004. [Google Scholar]
  15. Banerjee, S.; Khadem, N.K.; Kabir, M.M.; Jeihani, M. Driver Behavior Post Cannabis Consumption: A Driving Simulator Study in Collaboration with Montgomery County Maryland. arXiv 2021, arXiv:2112.12026. [Google Scholar]
  16. Kim, B.; Baek, Y. Sensor-Based Extraction Approaches of In-Vehicle Information for Driver Behavior Analysis. Sensors 2020, 20, 5197. [Google Scholar] [CrossRef]
  17. Haghighat, A.; Sharma, A. A Computer Vision-Based Deep Learning Model to Detect Wrong-Way Driving Using Pan–Tilt–Zoom Traffic Cameras. Comput.-Aided Civ. Infrastruct. Eng. 2022. [Google Scholar] [CrossRef]
  18. Tian, Y.; Cao, J. Fatigue driving detection based on electrooculography: A review. EURASIP J. Image Video Process. 2021, 2021, 33. [Google Scholar] [CrossRef]
  19. Zheng, W.L.; Gao, K.; Li, G.; Liu, W.; Liu, C.; Liu, J.Q.; Wang, G.; Lu, B.L. Vigilance Estimation Using a Wearable EOG Device in Real Driving Environment. IEEE Trans. Intell. Transp. Syst. 2019, 21, 170–184. [Google Scholar] [CrossRef]
  20. Yadav, N.; Thakur, U.; Poonia, A.; Chandel, R. Post-Crash Detection and Traffic Analysis. In Proceedings of the 2021 8th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 26–27 August 2021; pp. 1092–1097. [Google Scholar] [CrossRef]
  21. Choi, J.G.; Kong, C.W.; Kim, G.; Lim, S. Car crash detection using ensemble deep learning and multimodal data from dashboard cameras. Expert Syst. Appl. 2021, 183, 115400. [Google Scholar] [CrossRef]
  22. Hozhabr Pour, H.; Li, F.; Wegmeth, L.; Trense, C.; Doniec, R.; Grzegorzek, M.; Wismüller, R. A Machine Learning Framework for Automated Accident Detection Based on Multimodal Sensors in Cars. Sensors 2022, 22, 3634. [Google Scholar] [CrossRef]
  23. Vahidi, A.; Eskandarian, A. Research advances in intelligent collision avoidance and adaptive cruise control. IEEE Trans. Intell. Transp. Syst. 2003, 4, 143–153. [Google Scholar] [CrossRef]
  24. Kashevnik, A.; Lashkov, I.; Gurtov, A. Methodology and Mobile Application for Driver Behavior Analysis and Accident Prevention. IEEE Trans. Intell. Transp. Syst. 2019, 21, 2427–2436. [Google Scholar] [CrossRef]
  25. Jo, K.; Lee, M.; Kim, J.; Sunwoo, M. Tracking and behavior reasoning of moving vehicles based on roadway geometry constraints. IEEE Trans. Intell. Transp. Syst. 2016, 18, 460–476. [Google Scholar] [CrossRef]
  26. Ramanishka, V.; Chen, Y.T.; Misu, T.; Saenko, K. Toward driving scene understanding: A dataset for learning driver behavior and causal reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7699–7707. [Google Scholar]
  27. Peng, J.; Guo, Y.; Fu, R.; Yuan, W.; Wang, C. Multi-parameter prediction of drivers’ lane-changing behaviour with neural network model. Appl. Ergon. 2015, 50, 207–217. [Google Scholar] [CrossRef]
  28. Park, J.; Abdel-Aty, M.; Wu, Y.; Mattei, I. Enhancing in-vehicle driving assistance information under connected vehicle environment. IEEE Trans. Intell. Transp. Syst. 2018, 20, 3558–3567. [Google Scholar] [CrossRef]
  29. Li, F.; Shirahama, K.; Nisar, M.; Köping, L.; Grzegorzek, M. Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors. Sensors 2018, 18, 679. [Google Scholar] [CrossRef]
  30. Moon, S.E.; Kim, J.H.; Kim, S.W.; Lee, J.S. Prediction of Car Design Perception Using EEG and Gaze Patterns. IEEE Trans. Affect. Comput. 2021, 12, 843–856. [Google Scholar] [CrossRef]
  31. Gouverneur, P.; Li, F.; Adamczyk, W.M.; Szikszay, T.M.; Luedtke, K.; Grzegorzek, M. Comparison of Feature Extraction Methods for Physiological Signals for Heat-Based Pain Recognition. Sensors 2021, 21, 4838. [Google Scholar] [CrossRef] [PubMed]
  32. JINS MEME. JINS MEME Glasses Specifications. Available online: https://jins-meme.com/en/researchers/specifications/ (accessed on 23 June 2020).
  33. Doniec, R.; Sieciński, S.; Piaseczna, N.; Mocny-Pachońska, K.; Lang, M.; Szymczyk, J. The Classifier Algorithm for Recognition of Basic Driving Scenarios. In Information Technology in Biomedicine; Piętka, E., Badura, P., Kawa, J., Więcławek, W., Eds.; Springer Nature Switzerland AG: Cham, Switzerland, 2020; pp. 359–367. [Google Scholar] [CrossRef]
  34. Habibifar, N.; Salmanzadeh, H. Relationship between driving styles and biological behavior of drivers in negative emotional state. Transp. Res. Part F Traffic Psychol. Behav. 2022, 85, 245–258. [Google Scholar] [CrossRef]
  35. Muqeet, M.A.; Mohiuddin, R.; Thaniserikaran, A.; Ahmed, I.; Subrahmanyam, K.A.; Podapati, A. Self-Car Driving using Artificial Intelligence and Image Processing. Int. J. Res. Eng. Sci. Manag. 2022, 5, 23–30. [Google Scholar]
  36. Payyanadan, R.P.; Angell, L.S. A Framework for Building Comprehensive Driver Profiles. Information 2022, 13, 61. [Google Scholar] [CrossRef]
  37. Doniec, R.; Piaseczna, N.; Li, F. A Dataset for Classification of Road and Types Using EOG Smart Glasses. Available online: https://ieee-dataport.org/documents/dataset-classification-road-and-types-using-eog-smart-glasses (accessed on 12 September 2022). [CrossRef]
  38. Jia, Y.; Tyler, C.W. Measurement of saccadic eye movements by electrooculography for simultaneous EEG recording. Behav. Res. Methods 2019, 51, 2139–2151. [Google Scholar] [CrossRef] [Green Version]
  39. Fitting Probability Distribution to Data—MATLAB Documentation. Available online: https://www.mathworks.com/help/stats/fitdist.html (accessed on 20 June 2022).
  40. MATLAB—Classification Learner Documentation. Available online: https://www.mathworks.com/help/stats/classificationlearner-app.html (accessed on 20 June 2022).
  41. Seiffert, C.; Khoshgoftaar, T.M.; Van Hulse, J.; Napolitano, A. RUSBoost: A Hybrid Approach to Alleviating Class Imbalance. IEEE Trans. Syst. Man Cybern.-Part A Syst. Hum. 2010, 40, 185–197. [Google Scholar] [CrossRef]
  42. Fan, J.; Upadhye, S.; Worster, A. Understanding receiver operating characteristic (ROC) curves. Can. J. Emerg. Med. 2016, 8, 19–20. [Google Scholar] [CrossRef]
  43. Flach, P.A. ROC Analysis. In Encyclopedia of Machine Learning and Data Mining; Sammut, C., Webb, G.I., Eds.; Springer: New York, NY, USA, 2016; pp. 1–8. [Google Scholar] [CrossRef]
  44. Kumar, R.; Indrayan, A. Receiver operating characteristic (ROC) curve for medical researchers. Indian Pediatr. 2011, 48, 277–287. [Google Scholar] [CrossRef]
  45. Farhangi, F.; Sadeghi-Niaraki, A.; Nahvi, A.; Razavi-Termeh, S.V. Spatial modelling of accidents risk caused by driver drowsiness with data mining algorithms. Geocarto Int. 2022, 37, 2698–2716. [Google Scholar] [CrossRef]
  46. Wan, P.; Wu, C.; Lin, Y.; Ma, X. Optimal Threshold Determination for Discriminating Driving Anger Intensity Based on EEG Wavelet Features and ROC Curve Analysis. Information 2016, 7, 52. [Google Scholar] [CrossRef]
  47. Leicht, L.; Walter, M.; Mathissen, M.; Antink, C.H.; Teichmann, D.; Leonhardt, S. Unobtrusive Measurement of Physiological Features Under Simulated and Real Driving Conditions. IEEE Trans. Intell. Transp. Syst. 2022, 23, 4767–4777. [Google Scholar] [CrossRef]
  48. Hassanien, A.E. Virtual and Augmented Reality for Automobile Industry: Innovation Vision and Applications; Springer Nature: New York, NY, USA, 2022; Google-Books-ID: Rk1hEAAAQBAJ. [Google Scholar]
  49. Maltezos, E.; Lioupis, P.; Dadoukis, A.; Karagiannidis, L.; Ouzounoglou, E.; Krommyda, M.; Amditis, A. A Video Analytics System for Person Detection Combined with Edge Computing. Computation 2022, 10, 35. [Google Scholar] [CrossRef]
  50. Nagatomo, K.; Yasuda, M.; Yatabe, K.; Saito, S.; Oikawa, Y. Wearable SELD Dataset: Dataset for Sound Event Localization and Detection Using Wearable Devices around Head. Available online: http://xxx.lanl.gov/abs/2202.08458 (accessed on 28 July 2022).
  51. Lagodzinski, P.; Shirahama, K.; Grzegorzek, M. Codebook-based electrooculography data analysis towards cognitive activity recognition. Comput. Biol. Med. 2017, 95, 277–287. [Google Scholar] [CrossRef]
  52. Hua, X.; Ono, Y.; Peng, L.; Xu, Y. Unsupervised Learning Discriminative MIG Detectors in Nonhomogeneous Clutter. IEEE Trans. Commun. 2022, 70, 4107–4120. [Google Scholar] [CrossRef]
  53. Wax, M.; Adler, A. Detection of the Number of Signals by Signal Subspace Matching. IEEE Trans. Signal Process. 2021, 69, 973–985. [Google Scholar] [CrossRef]
  54. Li, F.; Shirahama, K.; Nisar, M.A.; Huang, X.; Grzegorzek, M. Deep Transfer Learning for Time Series Data Based on Sensor Modality Classification. Sensors 2020, 20, 4271. [Google Scholar] [CrossRef]
  55. Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; Lopez, A.M. The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 3234–3243. [Google Scholar] [CrossRef]
  56. Hurl, B.; Czarnecki, K.; Waslander, S. Precise Synthetic Image and LiDAR (PreSIL) Dataset for Autonomous Vehicle Perception. arXiv 2019, arXiv:1905.00160. [Google Scholar]
  57. Fabbri, M.; Braso, G.; Maugeri, G.; Cetintas, O.; Gasparini, R.; Osep, A.; Calderara, S.; Leal-Taixe, L.; Cucchiara, R. MOTSynth: How Can Synthetic Data Help Pedestrian Detection and Tracking? arXiv 2021, arXiv:2108.09518. [Google Scholar]
Figure 1. Experiment set-up.
Figure 2. All features ranked by decreasing ANOVA F scores.
Figure 3. The p-values associated with the ANOVA test for each feature. The features are ranked in order of decreasing F score. It can be noted that, with the exception of two features, all p-values were below 0.05, meaning that their associated F scores were significant.
Figure 4. Hyperparameter optimization process.
Figure 5. Confusion matrix for the test dataset.
Figure 6. ROC: class 1 (positive).
Figure 7. ROC: class 2 (positive).
Figure 8. ROC: class 3 (positive).
Figure 9. ROC: class 4 (positive).
Table 1. Dataset description: general statistics.

Signal | Minimum | Median | Maximum | Mean | Standard Deviation
$ACC_X$ | −12,083 | 340 | 19,519 | 1070.2 | 1616.9
$ACC_Y$ | −32,767 | −453 | 19,199 | −1342.9 | 1853.3
$ACC_Z$ | −32,768 | −2128 | 32,767 | −8230.8 | 7017.4
$GYRO_X$ | −10,575 | 12 | 2519 | 11.8098 | 141.3119
$GYRO_Y$ | −3465 | 3 | 4427 | 3.1471 | 132.9203
$GYRO_Z$ | −10,279 | 14 | 10,073 | 10.7835 | 345.0692
$EOG_L$ | −2048 | 470 | 2047 | 379.7323 | 640.3718
$EOG_R$ | −2048 | −69 | 2047 | 41.6679 | 730.8121
$EOG_H$ | −4080 | 189 | 4095 | 338.0645 | 825.8395
$EOG_V$ | −2047 | −61 | 2048 | −210.6461 | 548.9904
Table 2. Misclassification costs.

True Class \ Predicted Class | 1 | 2 | 3 | 4
1 | 0 | 2 | 1 | 1
2 | 2 | 0 | 1 | 1
3 | 1 | 1 | 0 | 1
4 | 1 | 1 | 1 | 0
Table 3. Comparative evaluation metrics of state-of-the-art classifiers on the test dataset. The ensemble classifiers obtained the best result for each metric.

Classifier | Accuracy | Precision | Recall | F1 Score
SVM | 0.74 | 0.72 | 0.73 | 0.72
KNN | 0.73 | 0.72 | 0.72 | 0.72
Neural networks | 0.77 | 0.76 | 0.76 | 0.76
Ensemble classifiers | 0.84 | 0.82 | 0.85 | 0.83
Table 4. Performance parameters for the test dataset.

Type of Road | Accuracy | Precision | Recall | F1 Score | AUC
Highway | 0.8755 | 0.8155 | 0.8755 | 0.8445 | 0.98
City road | 0.8377 | 0.9037 | 0.8377 | 0.8695 | 0.97
Undeveloped area | 0.9035 | 0.8050 | 0.9035 | 0.8514 | 0.99
Housing estate | 0.9081 | 0.9278 | 0.9081 | 0.9179 | 0.99
Overall | 0.8764 | 0.8630 | 0.8812 | 0.8708

