Smart Driver Behavior Recognition and 360-Degree Surround-View Camera for Electric Buses
Abstract
1. Introduction
2. Proposed Method and Experimental Setup
2.1. 360-Degree Surround-View System
2.2. 360-Degree Surround-View System Differences from Related Studies
2.3. Driver Behavior Recognition Based on YOLO
2.4. Driver Behavior Recognition Based on YOLO Differences from Related Studies
3. Performance Results
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Cicchino, J.B. Effects of lane departure warning on police-reported crash rates. J. Saf. Res. 2018, 66, 61–70.
2. Hidayatullah, M.R.; Juang, J.C. Adaptive Cruise Control with Gain Scheduling Technique under Varying Vehicle Mass. IEEE Access 2021, 9, 144241–144256.
3. Wang, Y.; Wang, Z.; Han, K.; Tiwari, P.; Work, D.B. Gaussian Process-Based Personalized Adaptive Cruise Control. IEEE Trans. Intell. Transp. Syst. 2022, 23, 21178–21189.
4. Strišković, B.; Vranješ, M.; Vranješ, D.; Popović, M. Recognition of maximal speed limit traffic signs for use in advanced ADAS algorithms. In Proceedings of the 2021 Zooming Innovation in Consumer Technologies Conference (ZINC), Novi Sad, Serbia, 26–27 May 2021; pp. 21–26.
5. Iranmanesh, S.M.; Mahjoub, H.N.; Kazemi, H.; Fallah, Y.P. An Adaptive Forward Collision Warning Framework Design Based on Driver Distraction. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3925–3934.
6. Cicchino, J.B. Effectiveness of forward collision warning and autonomous emergency braking systems in reducing front-to-rear crash rates. Accid. Anal. Prev. 2017, 99, 142–152.
7. Lee, J.; Kim, M.; Lee, S.; Hwang, S. Real-Time Downward View Generation of a Vehicle Using Around View Monitor System. IEEE Trans. Intell. Transp. Syst. 2020, 21, 3447–3456.
8. Yue, L.; Abdel-Aty, M.A.; Wu, Y.; Farid, A. The Practical Effectiveness of Advanced Driver Assistance Systems at Different Roadway Facilities: System Limitation, Adoption, and Usage. IEEE Trans. Intell. Transp. Syst. 2020, 21, 3859–3870.
9. Gojak, V.; Janjatovic, J.; Vukota, N.; Milosevic, M.; Bjelica, M.Z. Informational bird’s eye view system for parking assistance. In Proceedings of the 2017 IEEE 7th International Conference on Consumer Electronics-Berlin (ICCE-Berlin), Berlin, Germany, 3–6 September 2017; pp. 103–104.
10. Kato, J.; Sekiyama, N. Generating Bird’s Eye View Images Depending on Vehicle Positions by View Interpolation. In Proceedings of the 2008 3rd International Conference on Innovative Computing Information and Control, Dalian, China, 18–20 June 2008; p. 16.
11. Ananthanarayanan, G.; Bahl, P.; Bodík, P.; Chintalapudi, K.; Philipose, M.; Ravindranath, L.; Sinha, S. Real-Time Video Analytics: The Killer App for Edge Computing. Computer 2017, 50, 58–67.
12. Al-Hami, M.; Casas, R.; El-Salhi, S.; Awwad, S.; Hussein, F. Real-Time Bird’s Eye Surround View System: An Embedded Perspective. Appl. Artif. Intell. 2021, 35, 765–781.
13. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
14. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. In Computer Vision—ECCV 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
15. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
16. Pan, J.; Appia, V.; Villarreal, J.; Weaver, L.; Kwon, D.K. Rear-Stitched View Panorama: A Low-Power Embedded Implementation for Smart Rear-View Mirrors on Vehicles. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1184–1193.
17. J3016_202104; Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International: Warrendale, PA, USA, 2018.
18. Pugeault, N.; Bowden, R. How Much of Driving Is Preattentive? IEEE Trans. Veh. Technol. 2015, 64, 5424–5438.
19. Butakov, V.; Ioannou, P. Personalized Driver/Vehicle Lane Change Models for ADAS. IEEE Trans. Veh. Technol. 2015, 64, 4422–4431.
20. Martinez, C.M.; Heucke, M.; Wang, F.Y.; Gao, B.; Cao, D. Driving Style Recognition for Intelligent Vehicle Control and Advanced Driver Assistance: A Survey. IEEE Trans. Intell. Transp. Syst. 2018, 19, 666–676.
21. Hu, J.; Xu, L.; He, X.; Meng, W. Abnormal Driving Detection Based on Normalized Driving Behavior. IEEE Trans. Veh. Technol. 2017, 66, 6645–6652.
22. Chai, R.; Naik, G.R.; Nguyen, T.N.; Ling, S.H.; Tran, Y.; Craig, A.; Nguyen, H.T. Driver Fatigue Classification With Independent Component by Entropy Rate Bound Minimization Analysis in an EEG-Based System. IEEE J. Biomed. Health Inform. 2017, 21, 715–724.
23. Martin, M.; Voit, M.; Stiefelhagen, R. Dynamic Interaction Graphs for Driver Activity Recognition. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–7.
24. Nel, F.; Ngxande, M. Driver Activity Recognition Through Deep Learning. In Proceedings of the 2021 Southern African Universities Power Engineering Conference/Robotics and Mechatronics/Pattern Recognition Association of South Africa (SAUPEC/RobMech/PRASA), Potchefstroom, South Africa, 27–29 January 2021; pp. 1–6.
25. Xing, Y.; Lv, C.; Wang, H.; Cao, D.; Velenis, E.; Wang, F. Driver Activity Recognition for Intelligent Vehicles: A Deep Learning Approach. IEEE Trans. Veh. Technol. 2019, 68, 5379–5390.
26. Xing, Y.; Lv, C.; Zhang, Z.; Wang, H.; Na, X.; Cao, D.; Velenis, E.; Wang, F.Y. Identification and Analysis of Driver Postures for In-Vehicle Driving Activities and Secondary Tasks Recognition. IEEE Trans. Comput. Soc. Syst. 2018, 5, 95–108.
27. Halabi, O.; Fawal, S.; Almughani, E.; Al-Homsi, L. Driver activity recognition in virtual reality driving simulation. In Proceedings of the 2017 8th International Conference on Information and Communication Systems (ICICS), Irbid, Jordan, 4–6 April 2017; pp. 111–115.
28. Behera, A.; Wharton, Z.; Keidel, A.; Debnath, B. Deep CNN, Body Pose, and Body-Object Interaction Features for Drivers’ Activity Monitoring. IEEE Trans. Intell. Transp. Syst. 2022, 23, 2874–2881.
29. Zhao, L.; Yang, F.; Bu, L.; Han, S.; Zhang, G.; Luo, Y. Driver behavior detection via adaptive spatial attention mechanism. Adv. Eng. Inform. 2021, 48, 101280.
30. Yan, J.; Lei, Z.; Wen, L.; Li, S.Z. The Fastest Deformable Part Model for Object Detection. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2497–2504.
31. Lenc, K.; Vedaldi, A. R-CNN minus R. arXiv 2015, arXiv:1506.06981.
32. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
33. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28.
34. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
35. Bandyopadhyay, H. YOLO: Real-Time Object Detection Explained. 2022. Available online: https://www.v7labs.com/blog/yolo-object-detection#two-stagedetectors (accessed on 20 April 2023).
36. Szeliski, R. Stereo Vision: Introduction and Overview. In Computer Vision: Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2010.
37. Zhu, X.; Du, X.; Zhang, T.; Wei, Y. 360-Degree Surround View System Based on Super Resolution Convolutional Neural Network. J. Phys. Conf. Ser. 2020, 1621, 012041.
38. Hong, S.; Lee, J.; Lee, D.; Kim, M. An improved 360-degree surround view system using multiple fish-eye cameras. Sensors 2015, 15, 31614–31634.
39. Terven, J.; Cordova-Esparza, D.-M. A Comprehensive Review of YOLO: From YOLOv1 to YOLOv8 and Beyond. arXiv 2023, arXiv:2304.00501.
40. Çetinkaya, M.; Acarman, T. Driver Activity Recognition Using Deep Learning and Human Pose Estimation. In Proceedings of the 2021 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), Kocaeli, Turkey, 25–27 August 2021.
41. Smith, J. State Farm Distracted Driver Detection. Kaggle. 2016. Available online: https://www.kaggle.com/c/state-farm-distracted-driver-detection (accessed on 20 April 2023).
42. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
43. Mittal, A.; Moorthy, A.K.; Bovik, A.C. Blind/Referenceless Image Spatial Quality Evaluator. In Proceedings of the 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, 6–9 November 2011; pp. 723–727.
44. Triki, N.; Karray, M.; Ksantini, M. A Real-Time Traffic Sign Recognition Method Using a New Attention-Based Deep Convolutional Neural Network for Smart Vehicles. Appl. Sci. 2023, 13, 4793.
| Detection Framework | Train | mAP (%) | FPS |
|---|---|---|---|
| Fastest DPM [30] | VOC 2007 | 30.4 | 15 |
| R-CNN Minus R [31] | VOC 2007 | 53.5 | 6 |
| Fast R-CNN [32] | VOC 2007 + 2012 | 70.0 | 0.5 |
| Faster R-CNN VGG-16 [33] | VOC 2007 + 2012 | 73.2 | 7 |
| Faster R-CNN ZF [33] | VOC 2007 + 2012 | 62.1 | 18 |
| YOLO VGG-16 [34] | VOC 2007 + 2012 | 66.4 | 21 |
| YOLO [35] | VOC 2007 + 2012 | 63.4 | 45 |
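As a quick sanity check on the speed/accuracy trade-off the table summarizes, the sketch below filters the frameworks by frame rate and ranks the survivors by mAP. The data is copied from the table; the 30 FPS real-time threshold is our illustrative assumption, not a figure from the paper.

```python
# Detection-framework comparison from the table: (name, mAP %, FPS).
FRAMEWORKS = [
    ("Fastest DPM", 30.4, 15),
    ("R-CNN Minus R", 53.5, 6),
    ("Fast R-CNN", 70.0, 0.5),
    ("Faster R-CNN VGG-16", 73.2, 7),
    ("Faster R-CNN ZF", 62.1, 18),
    ("YOLO VGG-16", 66.4, 21),
    ("YOLO", 63.4, 45),
]

def real_time_candidates(frameworks, min_fps=30):
    """Return the frameworks fast enough for real-time video, best mAP first."""
    fast = [f for f in frameworks if f[2] >= min_fps]
    return sorted(fast, key=lambda f: f[1], reverse=True)

print(real_time_candidates(FRAMEWORKS))  # only YOLO clears 30 FPS in this table
```

At a 30 FPS threshold only YOLO qualifies, which is the trade-off that motivates its use here despite Faster R-CNN's higher mAP.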
| Camera Mounting Parameter | Value |
|---|---|
| **Camera Horizontal Distances** | |
| Cabin Width | 2512 mm |
| Cabin Length | 8953 mm |
| Distance of Rear Camera to Front Camera | 8953 mm |
| Distance of Rear Camera to Side Camera | x: 1189 mm, y: 2719 mm |
| Distance of Side Camera to Side Camera | 2512 mm |
| Distance of Side Camera to Front Camera | x: 1269 mm, y: 6233 mm |
| **Camera Vertical Distances** | |
| Front Camera Ground Distance | 3290 mm |
| Side Camera Ground Distance | 1428 mm |
| Rear Camera Ground Distance | 855 mm |
| **Outer Camera Angles** | |
| Camera Lateral Angle | 185° |
| Camera Vertical Angle | 142° |
| **Inner Camera Angles** | |
| Camera Lateral Angle | 180° |
| Camera Vertical Angle | 135° |
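Since the rear-to-side and side-to-front spacings are given as separate x/y offsets, the straight-line distances between those mounts follow directly from the Pythagorean theorem. A throwaway calculation over the table's millimetre values:

```python
import math

# Offsets between camera mounts, from the mounting table above (mm).
rear_to_side = math.hypot(1189, 2719)    # rear camera to side camera
side_to_front = math.hypot(1269, 6233)   # side camera to front camera

print(f"rear-to-side: {rear_to_side:.0f} mm")    # ~2968 mm
print(f"side-to-front: {side_to_front:.0f} mm")  # ~6361 mm
```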
```python
# Stereo calibration: estimate the rotation R and translation T between the two
# cameras from matched calibration-pattern points (objp: 3-D object points,
# leftp/rightp: the corresponding 2-D image points detected in each camera).
ret, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    objp, leftp, rightp, K1, D1, K2, D2, image_size,
    criteria=criteria, flags=cv2.CALIB_FIX_INTRINSIC)
# Other selectable flags: CALIB_USE_INTRINSIC_GUESS, CALIB_FIX_PRINCIPAL_POINT,
# CALIB_FIX_FOCAL_LENGTH, CALIB_FIX_ASPECT_RATIO, CALIB_SAME_FOCAL_LENGTH,
# CALIB_ZERO_TANGENT_DIST, CALIB_FIX_K1 ... CALIB_FIX_K6.

# Rectification: compute per-camera rotations (R1, R2) and projections (P1, P2)
# that bring both image planes into a common, row-aligned plane.
R1, R2, P1, P2, Q, roi_left, roi_right = cv2.stereoRectify(
    K1, D1, K2, D2, image_size, R, T,
    flags=cv2.CALIB_ZERO_DISPARITY, alpha=0.9)

# Build the undistortion/rectification lookup maps and remap each frame.
leftMapX, leftMapY = cv2.initUndistortRectifyMap(
    K1, D1, R1, P1, (width, height), cv2.CV_32FC1)
left_rectified = cv2.remap(
    leftFrame, leftMapX, leftMapY, cv2.INTER_LINEAR,
    borderMode=cv2.BORDER_CONSTANT)
rightMapX, rightMapY = cv2.initUndistortRectifyMap(
    K2, D2, R2, P2, (width, height), cv2.CV_32FC1)
right_rectified = cv2.remap(
    rightFrame, rightMapX, rightMapY, cv2.INTER_LINEAR,
    borderMode=cv2.BORDER_CONSTANT)
```
```cpp
// Warping: map four points in the source image to their bird's-eye-view
// positions and solve for the corresponding perspective transform.
cv::Point2f srcPointsF[] = { cv::Point2f(x1, y1), cv::Point2f(x2, y2),
                             cv::Point2f(x3, y3), cv::Point2f(x4, y4) };
cv::Point2f dstPointsF[] = { cv::Point2f(z1, x1), cv::Point2f(z2, x2),
                             cv::Point2f(z3, x3), cv::Point2f(z4, x4) };
cv::Mat F = cv::getPerspectiveTransform(srcPointsF, dstPointsF);
```
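`getPerspectiveTransform` solves an eight-unknown linear system for the 3×3 homography that carries the four source corners to their destinations. A minimal NumPy equivalent makes the underlying math explicit; the point values here are made up for illustration, not the calibrated ones from the paper.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 perspective transform H (with H[2,2] = 1) mapping each
    src point (x, y) to its dst point (u, v), as getPerspectiveTransform does
    internally: two linear equations per correspondence, eight unknowns."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, point):
    """Apply H in homogeneous coordinates, then de-homogenize."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return (x / w, y / w)

# Example: four image corners and (made-up) bird's-eye-view targets.
src = [(0, 0), (640, 0), (640, 480), (0, 480)]
dst = [(100, 50), (540, 80), (600, 470), (60, 440)]
H = homography_from_points(src, dst)
```

By construction, H maps each source corner exactly onto its target; with more than four correspondences one would switch to a least-squares fit.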
```cpp
// Masking: zero out everything outside the polygonal region of interest
// so only this camera's contribution survives the stitch.
cv::Mat maskR = cv::Mat::zeros(cv::Size(width, height), CV_8U);
std::vector<cv::Point> pts = { cv::Point(x1, y1), cv::Point(x2, y2),
                               cv::Point(x3, y3), cv::Point(x4, y4) };
cv::fillPoly(maskR, std::vector<std::vector<cv::Point>>{pts}, cv::Scalar(255));
cv::cvtColor(maskR, maskR, cv::COLOR_GRAY2BGR);  // match the 3-channel frame
cv::Mat resR;
cv::bitwise_and(frame, maskR, resR);
```
| Database: Driver Behaviors | |
|---|---|
| Instances | 3099 |
| Attributes | 10 |
| Sum of Weights | 286 |

| No | Attribute | Type |
|---|---|---|
| 1 | C0_smoking | Nominal |
| 2 | C1_talking_passenger | Nominal |
| 3 | C2_radio_checking | Nominal |
| 4 | C3_reaching_behind | Nominal |
| 5 | C4_drinking | Nominal |
| 6 | C5_texting | Nominal |
| 7 | C6_talking_on_phone | Nominal |
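The attribute names above double as the detector's output classes. A small sketch of the post-processing step follows; the class names come from the table, while the function, the index-to-label mapping, and the 0.5 confidence threshold are illustrative assumptions, not the paper's implementation.

```python
# Behavior classes from the dataset table (class index -> label).
BEHAVIORS = {
    0: "smoking",
    1: "talking_passenger",
    2: "radio_checking",
    3: "reaching_behind",
    4: "drinking",
    5: "texting",
    6: "talking_on_phone",
}

def label_detection(class_id, confidence, threshold=0.5):
    """Map a raw (class_id, confidence) prediction to a behavior label,
    or None when the detector is not confident enough."""
    if confidence < threshold or class_id not in BEHAVIORS:
        return None
    return BEHAVIORS[class_id]

print(label_detection(5, 0.91))  # -> texting
print(label_detection(2, 0.30))  # -> None (below threshold)
```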
Original Surround View | Modified Surround View |
---|---|
24.827 | 56.108 |
28.592 | 46.661 |
18.911 | 48.012 |
25.743 | 53.881 |
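Averaging the four paired scores summarizes the original-versus-modified comparison in the table above (a throwaway calculation over the values shown):

```python
# Per-frame scores from the table above.
original = [24.827, 28.592, 18.911, 25.743]
modified = [56.108, 46.661, 48.012, 53.881]

avg_original = sum(original) / len(original)  # 24.518...
avg_modified = sum(modified) / len(modified)  # 51.165...
print(avg_original, avg_modified)
```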
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Cuma, M.U.; Dükünlü, Ç.; Yirik, E. Smart Driver Behavior Recognition and 360-Degree Surround-View Camera for Electric Buses. Electronics 2023, 12, 2979. https://doi.org/10.3390/electronics12132979