Article

The Detection of Aggressive Driving Patterns in Two-Wheeled Vehicles Using Sensor-Based Approaches

Department of Geoinformatics, University of Seoul, 163 Seoulsiripdae-ro, Dongdaemun-gu, Seoul 02504, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(22), 12475; https://doi.org/10.3390/app132212475
Submission received: 12 October 2023 / Revised: 29 October 2023 / Accepted: 15 November 2023 / Published: 18 November 2023

Abstract

The growing concerns over road safety and the increasing popularity of two-wheeled vehicles highlight the need to address aggressive driving behaviors in this context. Understanding and detecting such behaviors can significantly contribute to rider safety and accident prevention. The primary aim of this research is to develop an effective method for detecting aggressive driving patterns, specifically rapid turns and lane-change maneuvers, in two-wheeled vehicles. To achieve this objective, we conducted a survey to establish criteria for aggressive driving. Subsequently, we collected data through a virtual simulator, implementing staged aggressive driving scenarios. The data underwent preprocessing, feature engineering, and deep learning model training for detection. The results of this study demonstrate the successful detection of aggressive driving patterns, including rapid turns and lane changes, using sensor data. The criterion for rapid turns is specified as a significant change in sensor values within 1 s. For aggressive lane changes, the CNN-LSTM model achieves a precision of 0.97 for normal driving and an overall accuracy of 95%. Our approach relies on sensor data rather than camera systems, which are impractical to mount on two-wheeled vehicles, and showcases the potential for enhancing rider safety. In conclusion, this research provides valuable insights into the detection of aggressive driving patterns in two-wheeled vehicles. By leveraging sensor data and innovative methods, it offers promising implications for improving rider safety and accident prevention.

1. Introduction

The rapid expansion of lifestyle logistics services requires proactive measures to address safety issues related to two-wheeled vehicles. In 2022, the delivery service market in South Korea reached approximately 26.5 trillion won, a growth of 973% compared to 2017, with a steep annual growth rate of 162% (KOSTAT, http://www.kostat.go.kr, accessed on 1 June 2023). The shift to non-face-to-face consumption patterns during the COVID-19 pandemic has increased the use of two-wheeled vehicles, resulting in a rise in delivery-related traffic accidents. Particularly concerning is the growing prevalence of traffic accidents involving two-wheeled vehicles, which are widely used because of their accessibility and cost advantages. According to the Ministry of Land, Infrastructure and Transport, the total number of traffic accident fatalities in South Korea in 2022 decreased by 6.21% compared to the previous year, to 2735 deaths [1]. However, fatalities involving two-wheeled vehicles increased by 5%, reaching 484 deaths, which underscores the necessity for enhanced safety measures for such vehicles.
Data collection systems and quantitative indicators for evaluating the safe driving of two-wheeled vehicle operators are insufficient. For commercial vehicles, such as buses and taxis, there are defined criteria for 11 risky driving behaviors. Judgment criteria for each violation are established by analyzing passenger discomfort metrics obtained from pupil reaction and electrocardiogram tests, and a comprehensive safe driving index is developed based on violation-specific weights (eTAS). For data collection in commercial vehicles, real-time data such as GPS position, speed, RPM, brake usage, and distance traveled are collected using a Digital Tachograph (DTG) to analyze vehicle operation patterns. However, the absence of data collection and analysis platforms for two-wheeled vehicles results in a significant lack of foundational data. Two-wheeled vehicles have greater maneuverability than four-wheeled vehicles, enabling more aggressive driving behaviors such as rapid turns and lane changes. For these reasons, methods for detecting and evaluating aggressive driving behavior in two-wheeled vehicles are limited. As a result, studies on safe driving indices for two-wheeled vehicles have been conducted using restricted assessment criteria, such as speed, acceleration, and work environment [2]. Methods for detecting the driving behaviors that can lead to accidents, such as rapid turns and rapid lane changes, are also inadequate.
Simulation technology and data-driven artificial intelligence (AI) are necessary for developing methods to detect aggressive driving behaviors in two-wheeled vehicles. In the absence of a data collection system for two-wheeled vehicles, acquiring real-world data on aggressive driving behaviors is practically challenging. Therefore, simulation technology that can replicate real-world situations is needed to acquire data. Additionally, considering the unconstrained driving behaviors of two-wheeled vehicles, AI techniques based on time-series data are essential for detecting aggressive driving. Time-series-based AI techniques can track driving characteristics such as speed, acceleration, and rotation over time [3], and they can effectively distinguish normal driving patterns from the abnormal patterns associated with aggressive driving [4].
Meanwhile, unlike conventional passenger cars and trucks, two-wheeled vehicles limit the types of detection devices that can be installed due to the nature of the vehicle. Compared to vehicles with readily accessible onboard sensors, such as the camera sensors used to capture driving videos, two-wheeled vehicles therefore have fewer options for detection devices. Hence, it becomes essential to work with the limited data available from simulations while considering the detection devices that could realistically be attached to actual two-wheeled vehicles in the future.
The limited data that can be obtained from simulators include sensor data such as speed, acceleration, and angular velocity. On this premise, this study aims to develop a method for detecting aggressive driving behaviors in simulated two-wheeled vehicles using the limited available data. In doing so, using only limited sensor data, we aim to contribute to the traffic safety field by presenting a method for setting classification criteria for aggressive driving of highly maneuverable two-wheeled vehicles.
The structure of this study consists of the following: Section 2, which covers the literature review; Section 3, which outlines the research methodology; Section 4, which covers applications; and Section 5, which concludes this study.

2. Literature Review

Aggressive driving in vehicles has traditionally been assessed based on factors such as sudden acceleration, abrupt braking, rapid lane changes, and sharp turns. However, two-wheeled vehicles are relatively more agile and maneuverable than conventional passenger cars and trucks, which often leads to safety accidents caused by excessive, rapid lane changes and turns [1].
In the context of rapid lane changes, research has predominantly utilized image-based algorithms to detect lane changes [5,6,7,8,9], often incorporating information about surrounding vehicles [3,10,11,12,13]. J. Gao et al. (2020) employed the LaneNet model to detect lanes in vehicle dashcam footage, determining lane changes based on the distance from the lane’s center [9]. Buhet et al. (2019) used RGB images from a vehicle’s camera sensor in the Carla simulator to detect lane changes [8]. Chauhan et al. (2020) utilized the YOLOv3 model to detect leading vehicles in dashcam video and identified rapid lane changes through the zigzag pattern of detected object movements across video frames. However, applying the methods used in existing lane-change detection research, which often rely on attaching image sensors for lane detection, is practically challenging for two-wheeled vehicles due to limitations in sensor placement. The distinctive free-form driving patterns of two-wheeled vehicles also limit the extraction of rapid lane changes from video footage even when they are equipped with image sensors; in particular, lane-change detection using images from an individual vehicle’s camera presents constraints. Jang et al. (2017) tackled this differently, detecting illegal lane changes from CCTV footage by tracking individual vehicle trajectories [5]. This approach focuses on movement characteristics and trajectories rather than relying solely on imagery from the individual vehicle.
Woo et al. (2017) utilized the relative velocity between the target vehicle attempting a lane change and its front and rear vehicles to distinguish zigzag driving within the lane from an actual lane change, employing an SVM model for detection [11]. Similarly, L. Zhang et al. (2019) used SVM and decision tree models with information about surrounding vehicles to detect lane changes [10]. Y. Zhang et al. (2022) applied the XGBoost model to predict lane changes from driving trajectory data, incorporating information such as angular velocity and distance from surrounding vehicles [3]. They emphasized utilizing information from the surrounding traffic flow for predicting lane changes and improved model accuracy by adjusting the segment near the lane-change point. While traditional machine learning techniques can be used to detect lane changes, it remains challenging to equip individual two-wheeled vehicles with devices capable of detecting information about surrounding vehicles. On the other hand, C. Zhao et al. (2022) estimated lane changes using data on the relative distances and driving speeds of adjacent vehicles [12]. They compared various machine learning models against a multilayer perceptron in terms of architectural complexity, and interestingly, even with a relatively simple architecture, the deep learning model outperformed the traditional machine learning models.
Furthermore, there have been attempts to detect rapid lane changes using simple sensor data [14]. Eftekhari and Ghatee (2019) classified vehicle behaviors, including lane changes, rotations, and U-turns, using only smartphone sensors such as accelerometers, magnetometers, and gyroscopes. Their experiment involved 20 real-world drivers, and the relatively small dataset limited the application of complex techniques such as artificial neural networks. While lane changes were sometimes misclassified as rotations, the overall accuracy was 87%, indicating potential for improvement with greater data availability. Thus, in this study, an attempt is made to detect rapid lane changes using trajectory data obtained solely from an individual two-wheeled vehicle’s onboard sensors, such as GNSS/GPS, without utilizing image-based algorithms or information about surrounding traffic. Additionally, this study aims to enhance detection accuracy through data acquisition from simulations.

3. Research Methodology

Figure 1 represents the overall framework of this research, which is structured as follows. Initially, criteria for aggressive driving were established through surveys since determining stages of aggressive driving can be subjective. To collect data on these stages, a survey was conducted with 100 participants using videos depicting various levels of aggressive driving scenarios simulated in a virtual simulator. Subsequently, the collected data were employed to develop an aggressive driving detection model. Detailed information regarding data preprocessing for aggressive driving detection and the developed model will be elucidated in the subsequent sections. Finally, the model’s performance is evaluated through practical case studies.
In this study, the detection criteria for aggressive driving on two-wheeled vehicles were narrowed down to rapid turns and rapid lane changes. Rapid turns were considered suitable for aggressive driving detection, as they accounted for more than half of all two-wheeled vehicle accidents in Seoul, South Korea, in 2021 (KoROAD, https://taas.koroad.or.kr, accessed on 1 June 2023). Rapid lane changes were considered in comparison with rapid turns, given the limited distinction between them on the time scale shown in Figure 2. Figure 2 illustrates the temporal changes in the z-axis gyroscope values for a rapid turn and a rapid lane change. As evident in the figure, distinct differences in the movement patterns of rapid turns and rapid lane changes can be observed. For rapid turns (Figure 2a), the maximal change in values occurs within 1 s, while for rapid lane changes (Figure 2b), it occurs within 0.5 s. In addition, the maximum values of the z-axis angular velocity clearly differentiate rapid turns from rapid lane changes. Furthermore, while drivers ordinarily change lanes to overtake other vehicles, rapid lane changes are regarded as aggressive driving behaviors that can lead to accidents, such as rear-end collisions. Therefore, this study determined that rapid lane changes are also suitable for the category of aggressive driving detection on two-wheeled vehicles. Meanwhile, we chose to focus on turn-related items and thus excluded other aggressive driving behaviors, such as rapid acceleration, rapid deceleration, and close proximity to pedestrians.
To detect aggressive driving, sensor data extracted from the Carla simulator were chosen with considerations for the feasibility of onboard installation on real two-wheeled vehicles. Image data and information about surrounding vehicles that can be extracted from camera and lidar sensors were excluded from consideration. Instead, acceleration, angular velocity, and position information were utilized.
The Carla simulator is an open-source simulator designed for autonomous driving research [15]. Apart from its application in autonomous driving research, it serves as a valuable platform for generating and modeling trajectory data for two-wheeled vehicles. The simulator recreates various environments and road conditions virtually to simulate the behavior of two-wheeled vehicles, and it provides simulated sensors, such as speed, gyroscope, and accelerometer readings, that generate data resembling real-world driving situations. In this study, various aggressive driving scenarios and trajectory data that would be difficult to collect on actual roads were generated and analyzed using the simulator.
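As an illustration of this kind of setup, the sketch below logs IMU and position data for a vehicle in Carla through its standard Python API. It is a minimal sketch only: the motorcycle blueprint id, the 10 Hz sensor tick, the one-minute run, and the CSV layout are assumptions for illustration, not the study's exact configuration.

```python
# Minimal sketch of logging IMU and position data from a vehicle in the Carla
# simulator. Assumes a Carla server is running on localhost:2000; the blueprint
# id, sampling rate, and output format are illustrative assumptions.
import csv
import math
import time

import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

# Spawn a two-wheeled vehicle and let the autopilot drive it.
vehicle_bp = blueprints.find("vehicle.kawasaki.ninja")  # assumed motorcycle blueprint
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

rows = []

def on_imu(measurement):
    """Record acceleration (m/s^2), angular velocity (converted to deg/s),
    position, and speed at each sensor tick."""
    velocity = vehicle.get_velocity()
    speed = (velocity.x ** 2 + velocity.y ** 2 + velocity.z ** 2) ** 0.5
    location = vehicle.get_location()
    rows.append([
        measurement.timestamp,
        location.x, location.y,
        measurement.accelerometer.x, measurement.accelerometer.y, measurement.accelerometer.z,
        math.degrees(measurement.gyroscope.x),
        math.degrees(measurement.gyroscope.y),
        math.degrees(measurement.gyroscope.z),
        speed,
    ])

imu_bp = blueprints.find("sensor.other.imu")
imu_bp.set_attribute("sensor_tick", "0.1")  # 10 Hz sampling (assumed)
imu = world.spawn_actor(imu_bp, carla.Transform(), attach_to=vehicle)
imu.listen(on_imu)

time.sleep(60)  # let the scenario play out
imu.stop()

with open("trajectory_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t", "x", "y", "acc_x", "acc_y", "acc_z",
                     "gyro_x", "gyro_y", "gyro_z", "speed"])
    writer.writerows(rows)

imu.destroy()
vehicle.destroy()
```

Note that Carla reports angular velocity in rad/s; it is converted to deg/s above only so the logged values are comparable to the units used in Table 1.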

3.1. Rapid Turn

Rapid turns can be detected using gyroscope and acceleration data [16]. While previous research utilized only the z-axis gyroscope to detect rapid turns, this study considers the y-axis and z-axis acceleration values in addition to the z-axis gyroscope values to differentiate between stages of rapid turns. Figure 3 illustrates the changes in the z-axis gyroscope, y-axis acceleration, and z-axis acceleration values across the different stages of rapid turns. As the rapid-turn stage increases, the z-axis gyroscope value changes more sharply, and both the y-axis and z-axis acceleration values exhibit larger fluctuations. As the level of aggressiveness increases, the time interval within which a single segment is estimated, based on the maximum values of the z-axis gyroscope, y-axis acceleration, and z-axis acceleration, decreases, and these maximum values trend upward. Furthermore, a distinct skew towards the right is observed, which effectively captures the maneuvering patterns associated with rapid turns in two-wheeled vehicles. Finally, the z-axis gyroscope values indicate an inclination effect in the vehicle, particularly noticeable from aggressiveness Stage 3 onwards, as it returns to regular driving after completing a rapid turn. Based on these observations, this study defines the criteria for rapid turns as shown in Table 1.
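A minimal sketch of how the criteria in Table 1 could be applied over a rolling 1 s window is given below. The column names, the 10 Hz sampling rate, and the way the three thresholds are combined into a single stage are assumptions, since the table itself does not prescribe an implementation.

```python
# Minimal sketch of applying the rapid-turn criteria of Table 1 over a rolling
# 1 s window. Column names, sampling rate, and the combination rule for the
# three thresholds are illustrative assumptions.
import pandas as pd

SAMPLES_PER_SECOND = 10  # assumed sampling rate of the sensor log

def rapid_turn_stage(max_gyro_z: float, max_acc_y: float, max_acc_z: float) -> int:
    """Return 0 for normal driving or 1-4 for rapid-turn stages S1-S4, based on
    the 1 s maxima of |gyro-z| (deg/s) and |acc-y|, |acc-z| (m/s^2)."""
    if max_gyro_z >= 90 and max_acc_y >= 15 and max_acc_z >= 12.5:
        return 4
    if max_gyro_z >= 70 and max_acc_y >= 10 and max_acc_z >= 10:
        return 3
    if max_gyro_z >= 50 and max_acc_y >= 5:
        return 2
    if max_gyro_z >= 30:
        return 1
    return 0

def stage_series(log: pd.DataFrame) -> pd.Series:
    """Stage every sample of a log with columns gyro_z, acc_y, acc_z."""
    window = SAMPLES_PER_SECOND  # 1 s window
    maxima = pd.DataFrame({
        "gyro": log["gyro_z"].abs().rolling(window).max(),
        "acc_y": log["acc_y"].abs().rolling(window).max(),
        "acc_z": log["acc_z"].abs().rolling(window).max(),
    })
    return maxima.apply(
        lambda r: rapid_turn_stage(r["gyro"], r["acc_y"], r["acc_z"]), axis=1
    )
```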

3.2. Rapid Lane Change

Rapid lane changes exhibit a back-and-forth pattern due to the nature of vehicle movement [16], for instance, rapidly changing from a left turn to a right turn within a short time. Figure 4 illustrates the staged rapid lane changes, with gray segments indicating the rapid lane-change periods. It is noticeable that the z-axis gyroscope value oscillates around 0 during rapid lane-change segments. However, there are many cases where the z-axis gyroscope value oscillates but does not correspond to a rapid lane change, and the acceleration value is inconsistent. This indicates the difficulty of establishing a straightforward definition for rapid lane changes due to their complex and non-linear nature. Moreover, rapid lane changes tend to oscillate rapidly from left to right in a short period, making it challenging to define solely based on observation.
For these reasons, the classification of rapid lane changes was conducted as shown in Figure 5. First, in the input unit of Figure 5, the data processing phase transforms the multivariate time-series data, consisting of speed, angular velocity, acceleration, and coordinate information, through feature engineering. The features from the raw data and the results of feature engineering are presented in Table 2. Due to the temporal nature of time-series data, data augmentation is critically important [17]. In this study, flipping and window cropping were utilized: flipping horizontally flips the given time series, while window cropping extracts portions from specific time intervals and reinserts them at random positions. For window cropping, a 2 s window was set in consideration of the characteristics of rapid lane changes. Since the processed data originated from the survey-based scenarios, datasets were available for each stage of rapid lane changes. Consequently, when splitting into training and testing datasets, care was taken to ensure that specific classes were not concentrated within certain time periods. The data split ratios for training, testing, and validation were 6:2:2 for Stage 1, 7:1.5:1.5 for Stages 2 and 3, and 8:1:1 for Stage 4. After segregating the data into stage-specific rapid lane-change datasets, they were consolidated, completing the data preprocessing phase. The processed data were then normalized, followed by feature selection (the feature selection unit in Figure 5). Feature selection was performed using the feature importance provided by XGBoost; the top 10 features are listed in Table 3.
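The sketch below illustrates, under stated assumptions, the flipping and 2 s window-cropping augmentation and the XGBoost-importance-based selection of the top 10 features described above. The sampling rate, function names, and XGBoost hyperparameters are illustrative, not the study's exact settings.

```python
# Minimal sketch of the flipping and 2 s window-cropping augmentation and the
# XGBoost-importance-based top-10 feature selection. Sampling rate, function
# names, and model settings are illustrative assumptions.
import numpy as np
import pandas as pd
import xgboost as xgb

SAMPLES_PER_SECOND = 10               # assumed sampling rate
CROP_WINDOW = 2 * SAMPLES_PER_SECOND  # 2 s cropping window

def flip(segment: np.ndarray) -> np.ndarray:
    """Reverse a (time, features) segment along the time axis."""
    return segment[::-1].copy()

def window_crop(segment: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Cut a random 2 s window out of the segment and reinsert it at a random
    position, perturbing the temporal layout while keeping the values."""
    start = int(rng.integers(0, len(segment) - CROP_WINDOW))
    crop = segment[start:start + CROP_WINDOW]
    rest = np.delete(segment, np.s_[start:start + CROP_WINDOW], axis=0)
    insert_at = int(rng.integers(0, len(rest) + 1))
    return np.concatenate([rest[:insert_at], crop, rest[insert_at:]], axis=0)

def select_top_features(features: pd.DataFrame, labels: pd.Series, k: int = 10) -> list:
    """Rank engineered features by XGBoost feature importance and keep the top k."""
    model = xgb.XGBClassifier(n_estimators=200, max_depth=6)
    model.fit(features, labels)
    importance = pd.Series(model.feature_importances_, index=features.columns)
    return importance.sort_values(ascending=False).head(k).index.tolist()
```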
Finally, in the CNN-LSTM unit of Figure 5, a time-series deep learning model was trained using the 10 extracted features. The model employed in this study is a CNN-LSTM with an LSTM backbone and convolution layers [18]. An assessment of different parameter settings for hidden state size (128, 256, 512) and number of recurrent layers (2, 4, 6) indicated that these parameters primarily affected training time and did not significantly impact the model’s performance metrics. Consequently, the hidden state size and the number of recurrent layers were set to 128 and 6, respectively. Layer normalization was applied, and to address class imbalance, class-balancing weights were applied in the loss function. Figure 6 and Table 4, respectively, illustrate the model’s performance graphs and the classification results for rapid lane changes. Training was conducted for 11 K epochs, and judging by the accuracy and loss on both the training and validation datasets, we determined that training proceeded successfully. The overall accuracy for the classification of rapid lane changes is 95%, and the average precision, recall, and F1-score across normal driving and the stage-specific rapid lane changes are 0.926, 0.886, and 0.904, respectively.
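For concreteness, the following PyTorch sketch shows one way a CNN-LSTM of the kind described here could be assembled: convolution layers over the 10 selected features feeding an LSTM backbone with hidden size 128 and 6 recurrent layers, layer normalization, and a class-weighted loss. Kernel sizes, channel counts, the specific class weights, the window length, and the training loop are assumptions, not the study's exact architecture.

```python
# Minimal PyTorch sketch of a CNN-LSTM classifier consistent with the setup
# described above. Kernel sizes, channel counts, class weights, and the dummy
# batch are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features: int = 10, n_classes: int = 5,
                 hidden_size: int = 128, num_layers: int = 6):
        super().__init__()
        # 1-D convolutions over the time axis; input shape (batch, features, time).
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> conv expects (batch, channels, time)
        feats = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(feats)
        last = self.norm(out[:, -1, :])  # last time step, layer-normalized
        return self.head(last)

# Class-weighted loss to counter the imbalance between normal driving and the
# rarer rapid lane-change stages (the weights here are placeholders).
class_weights = torch.tensor([0.5, 1.0, 1.5, 1.5, 2.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)
model = CNNLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 2 s windows (20 samples).
x = torch.randn(32, 20, 10)     # (batch, time, features)
y = torch.randint(0, 5, (32,))  # 0 = normal, 1-4 = lane-change stages
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```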

4. Applications

In this section, we describe the process of applying and validating the developed model for classifying rapid lane changes and rapid turns in various scenarios. Figure 7 illustrates how the Carla simulator was used to apply the model. For rapid turns, we simulated scenarios occurring at intersections and curved road sections, while for rapid lane changes, validation was performed on straight sections and entrance ramp regions. It is important to note that the driving data used here are not from the dataset used for model development but rather a new dataset designed for qualitative validation. In both the intersection and straight-section scenarios, representative cases were chosen in which all stages of aggressive driving were classified within the same stretch of road.

4.1. Scenario 1: Gentle Curve

Figure 8 represents a scenario corresponding to region (b) in Figure 7. This scenario pertains to rapid turns on gently curved roads and is a notable case in which all stages of aggressive driving were detected. Because of the spatial scale of this scenario, not all driving along the long, gentle roads was detected as rapid turning; detection was limited to instances where the vehicle turned abruptly rather than smoothly navigating the curve while cornering. In all stages, rapid turns were detected near the midpoint of the road’s curve, with normal driving observed before and after. Furthermore, as the aggressiveness of the driving increased, a pattern of abrupt turning, rather than smooth curving, became more pronounced.

4.2. Scenario 2: Intersection

Figure 9 represents a scenario corresponding to region (c) in Figure 7. This scenario involves a turn at an intersection and serves as a representative case in which all stages of aggressive driving were detected. In this simulated scenario, one can observe distinct differences in the movement patterns of the virtual two-wheeled vehicle before, during, and after the turn. In particular, as the intensity of aggressive driving increases, the vehicle cuts closer to the corner during the turn, and after the turn its trajectory converges more abruptly toward the direction of travel.

4.3. Scenario 3: Entrance Ramp

Figure 10 represents a scenario corresponding to region (d) in Figure 7. This scenario simulates a reduction in lane count at a merging point, such as a highway entrance, and the case in Figure 10 is a prominent example in which all stages of aggressive driving were detected. Upon closer examination, in Stage 1 the vehicle made a rapid lane change just before the merging point, and another lane change occurred after entering the merging zone. In Stage 2, an aggressive lane change was executed prior to the entrance zone, and no further lane changes were made after navigating the curved section. In Stage 3, multiple lane changes were made to secure a larger turning radius before entering the merging zone. In Stage 4, just before entering the merging zone, the vehicle swerved across all lanes, aggressively squeezing into the merging area without executing lane changes in advance. In all of these cases, the trajectories were correctly detected as lane changes along the curve without any confusion.

4.4. Scenario 4: Straight Section

Figure 11 represents a scenario corresponding to region (e) in Figure 7. In this scenario, all stages of aggressive lane change are classified in a representative case on the same straight road segment. A road with a total of five lanes is depicted, and the lane changes at each stage are illustrated. Lane changes were classified according to their stage in the segments where they were attempted. The visualization simply represents the lane-change patterns, and the stages of rapid lane change are not clearly distinguishable by eye, reaffirming the non-linearity of the aggressive lane-change classification problem.

5. Conclusions

In this study, we propose a method for detecting aggressive driving in two-wheeled vehicles and emphasize its significance for enhancing the safety of two-wheeled riders and preventing traffic accidents. The key outcome of our investigation is a model for identifying aggressive driving patterns, namely rapid turns and rapid lane changes. We derived the inherently subjective concept of aggressiveness through surveys and gathered data on risky driving behavior via simulations. Notably, because installing cameras on two-wheeled vehicles is difficult, we explored an approach that detects aggressive lane changes by relying solely on sensor data rather than camera technology.
Through this research, we illustrate the potential for identifying aggressive driving in two-wheeled vehicles using sensor data. We formulated criteria for rapid turns through observations and delineated patterns for aggressive lane changes through various feature engineering techniques and the CNN-LSTM model. The results of practical application further validated the correspondence with these patterns. We anticipate that these research findings will contribute to the advancement of two-wheeled driver safety and the prevention of traffic accidents.

Author Contributions

Conceptualization, D.K.; methodology, D.K. and H.K.; software, D.K. and H.K.; validation, D.K. and H.K.; formal analysis, D.K.; resources, H.K.; writing—original draft preparation, D.K.; writing—review and editing, D.K.; visualization, D.K.; project administration, C.J.; funding acquisition, C.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by grant number (2022-MOIS41-005) of the Citizen-customized Life Safety Technology Development Program funded by the Ministry of the Interior and Safety (MOIS, Republic of Korea).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they contain information that could compromise the privacy of the experimental participants.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lim, J.; Gang, H. A Study on the Traffic Accident Reduction Countermeasures Using the Two-Wheel Vehicle Control System. Korea Transportation Safety Authority. 2021. Available online: www.kotsa.or.kr (accessed on 1 June 2023).
  2. Lim, J.-B.; Kim, K.-M.; Park, J.-T. An empirical study for the introduction of two-wheeled vehicle control technology and behavior analysis. J. Korea Acad. Coop. Soc. 2022, 23, 343–350. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Shi, X.; Zhang, S.; Abraham, A. A XGBoost-Based Lane Change Prediction on Time Series Data Using Feature Engineering for Autopilot Vehicles. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19187–19200. [Google Scholar] [CrossRef]
  4. Shangguan, Q.; Fu, T.; Wang, J.; Luo, T.; Fang, S. An integrated methodology for real-time driving risk status prediction using naturalistic driving data. Accid. Anal. Prev. 2021, 156, 106122. [Google Scholar] [CrossRef] [PubMed]
  5. Jang, J.-M. Detection of Reckless Driving using Deep Learning. In Proceedings of the 19th IEEE International Conference on Machine Learning and Applications, ICMLA 2020, Miami, FL, USA, 14–17 December 2020; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2020; pp. 853–858. [Google Scholar] [CrossRef]
  6. Chen, Z.; Wu, C.; Huang, Z.; Lyu, N.; Hu, Z.; Zhong, M.; Cheng, Y.; Ran, B. Dangerous driving behavior detection using video-extracted vehicle trajectory histograms. J. Intell. Transp. Syst. 2017, 21, 409–421. [Google Scholar] [CrossRef]
  7. Doshi, A.; Trivedi, M.M. Examining the impact of driving style on the predictability and responsiveness of the driver: Real-world and simulator analysis. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 232–237. [Google Scholar] [CrossRef]
  8. Buhet, T.; Wirbel, E.; Perrotton, X. Conditional Vehicle Trajectories Prediction in CARLA Urban Environment. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019. [Google Scholar] [CrossRef]
  9. Gao, J.; Murphey, Y.L.; Yi, J.; Zhu, H. A data-driven lane-changing behavior detection system based on sequence learning. Transp. B Transp. Dyn. 2020, 10, 831–848. [Google Scholar] [CrossRef]
  10. Zhang, L.; Yan, L.; Fang, Y.; Fang, X.; Huang, X. A Machine Learning-Based Defensive Alerting System against Reckless Driving in Vehicular Networks. IEEE Trans. Veh. Technol. 2019, 68, 12227–12238. [Google Scholar] [CrossRef]
  11. Woo, H.; Ji, Y.; Kono, H.; Tamura, Y.; Kuroda, Y.; Sugano, T.; Yamamoto, Y.; Yamashita, A.; Asama, H. Lane-Change Detection Based on Vehicle-Trajectory Prediction. IEEE Robot. Autom. Lett. 2017, 2, 1109–1116. [Google Scholar] [CrossRef]
  12. Zhao, C.; Zhao, X.; Li, Z.; Zhang, Q. XGBoost-DNN Mixed Model for Predicting Driver’s Estimation on the Relative Motion States during Lane-Changing Decisions: A Real Driving Study on the Highway. Sustainability 2022, 14, 6829. [Google Scholar] [CrossRef]
  13. Zhao, H.; Zhang, Y.; Meng, P.; Shi, H.; Li, L.E.; Lou, T.; Zhao, J. Safety Score: A Quantitative Approach to Guiding Safety-Aware Autonomous Vehicle Computing System Design. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2020; pp. 1479–1485. [Google Scholar] [CrossRef]
  14. Eftekhari, H.R.; Ghatee, M. A similarity-based neuro-fuzzy modeling for driving behavior recognition applying fusion of smartphone sensors. J. Intell. Transp. Syst. Technol. Plan. Oper. 2018, 23, 72–83. [Google Scholar] [CrossRef]
  15. Dosovitskiy, A.; Ros, G.; Codevilla, F.; López, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. In Proceedings of the Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; pp. 1–16. [Google Scholar] [CrossRef]
  16. Gao, R.; Sun, F.; Xing, W.; Tao, D.; Fang, J.; Chai, H. CTTE: Customized travel time estimation via mobile crowdsensing. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19335–19347. [Google Scholar] [CrossRef]
  17. Wen, Q.; Sun, L.; Yang, F.; Song, X.; Gao, J.; Wang, X.; Xu, H. Time Series Data Augmentation for Deep Learning: A Survey. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 19–27 August 2021; pp. 4653–4660. [Google Scholar] [CrossRef]
  18. Wang, J.; Yu, L.-C.; Lai, K.R.; Zhang, X. Dimensional sentiment analysis using a regional CNN-LSTM model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, Berlin, Germany, 7–12 August 2016; Association for Computational Linguistics (ACL): Kerrville, TX, USA, 2016; pp. 225–230. [Google Scholar] [CrossRef]
Figure 1. Research framework illustrating the approach to the aggressive driving detection model.
Figure 2. z-axis gyroscope signals used for rapid turn detection (a) and rapid lane-change detection (b). The data are extracted from survey responses.
Figure 3. Inertial sensing data used for rapid turn detection. Data obtained from the Carla simulator; (a–d) are the stages (1–4) of rapid turn defined in the survey, respectively.
Figure 4. Inertial sensing data used for rapid lane-change detection. Data obtained from the Carla simulator; (a–d) are the stages (1–4) of rapid lane change defined in the survey, respectively. The gray section is judged to be a rapid lane change in the survey.
Figure 5. Illustration of rapid lane-change classification process.
Figure 6. Graph representing model accuracy and model loss for the training and validation sets using the CNN-LSTM approach. The red and black lines are the training and validation sets, respectively.
Figure 7. The Carla map with the aggressive driving model (a). Application of the aggressive driving model to the gentle curve section (b), the intersection (c), the highway entrance ramp (d), and the straight section (e) scenarios. The solid green and white lines represent the vehicle’s trajectory and the lane, respectively. Each marker in (b–e) corresponds to a point indicating instances of rapid turn and lane change.
Figure 8. Scenario of curved road in Figure 7b region.
Figure 9. Scenario of intersection in Figure 7c region.
Figure 10. Scenario of entrance ramp in Figure 7d region.
Figure 11. Scenario of straight section in Figure 7e region.
Table 1. Rapid turn criteria. S1 to S4 denote the rapid-turn stages.
Stage | Time | Max. Gyro-Z 1 | Max. Acc-Y 2 | Max. Acc-Z 3
Rapid Turn (S1) | 1 s | 30–50 deg/s | 0–5 m/s² | -
Rapid Turn (S2) | 1 s | 50–70 deg/s | 5–10 m/s² | -
Rapid Turn (S3) | 1 s | 70–90 deg/s | 10–15 m/s² | 10–12.5 m/s²
Rapid Turn (S4) | 1 s | ≥90 deg/s | ≥15 m/s² | ≥12.5 m/s²
1 Max z-axis angular velocity within 1 s. 2 Max y-axis acceleration within 1 s. 3 Max z-axis acceleration within 1 s.
Table 2. Raw dataset and feature engineering results.
Raw Dataset
Location | Lateral (x) and longitudinal (y) position of the vehicle
Gyroscope | Angular velocity (x-, y-, z-axis) from the IMU sensor in the Carla simulator
Acceleration | Acceleration (x-, y-, z-axis) from the IMU sensor in the Carla simulator
Speed | Speed of the vehicle
Feature Engineering
Jerk, Snap | The rate of change of acceleration and the rate of change of jerk, respectively
Moving time window | Moving-window statistics (mean, max, min) with time intervals of 1 s, 2 s, and 5 s applied to the variables
Mean, std, max, min, quantile values | Means and standard deviations of the variables under different window sizes
Angle | Angle of the road under different window sizes (0.5 s, 1 s, and 2 s)
Percentage change | Percentage change in the variables (speed, acceleration, and angular velocity)
Log ratio | Log ratio of the variables (speed, acceleration, and angular velocity)
Table 3. Selected features.
Rank of Feature Importance | Notation of Feature | Definition
1 | acc-ymmax500 | Moving max of y-axis acceleration with a 5 s window
2 | gyro-zmmax100 | Moving max of z-axis angular velocity with a 1 s window
3 | gyro-xmmax100 | Moving max of x-axis angular velocity with a 1 s window
4 | acc-zmmax100 | Moving max of z-axis acceleration with a 1 s window
5 | anglemmax50 | Moving max angle with a 0.5 s window
6 | speedmstd500 | Moving standard deviation of speed with a 5 s window
7 | acc-zmstd500 | Moving standard deviation of z-axis acceleration with a 5 s window
8 | anglemmax100 | Moving max angle with a 1 s window
9 | speedma | Moving average speed under different window sizes
10 | speedmmax500 | Moving max speed with a 5 s window
Table 4. Evaluation of the CNN-LSTM classification results for rapid lane change. LC denotes the rapid lane-change class, and S1 to S4 are its stages.
Class | Precision | Recall | F1-Score
Normal | 0.97 | 0.99 | 0.98
LC (S1) | 0.95 | 0.92 | 0.93
LC (S2) | 0.89 | 0.80 | 0.84
LC (S3) | 0.97 | 0.84 | 0.90
LC (S4) | 0.85 | 0.88 | 0.87
Overall accuracy | 0.95
