A Lightweight Attention-Based Network towards Distracted Driving Behavior Recognition
Abstract
1. Introduction
- We propose a lightweight Inverted Residual Attention Module (IRAM) to address the relatively low accuracy of current lightweight networks. IRAM effectively improves classification accuracy with almost no increase in trainable parameters or computational cost.
- We embed depthwise separable convolutions into the classic VGG16 and optimize the network structure. Whereas classic CNNs are too complex to be deployed on edge devices, LWANet has very few trainable parameters and a small model size.
- We utilize a subtract mean filter during image preprocessing to further improve the model's adaptability to different environments (see the sketch after this list).
- Compared with existing state-of-the-art deep learning-based methods, the proposed LWANet has fewer parameters and can be applied in a variety of embedded real-time detection scenarios.
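The subtract mean filter is detailed in Section 3.1. As a rough illustration only, the sketch below shows one plausible implementation that subtracts a local (box-filtered) mean from each frame to suppress slowly varying illumination; the kernel size and the min-max rescaling are assumptions made for this example, not settings taken from the paper.

```python
import cv2
import numpy as np

def subtract_mean_filter(image: np.ndarray, ksize: int = 15) -> np.ndarray:
    """Subtract a box-filtered (local mean) copy of the frame from the original.

    Assumption: the filter removes low-frequency illumination and emphasizes
    edges; `ksize` is a hypothetical kernel size chosen for illustration.
    """
    img = image.astype(np.float32)
    local_mean = cv2.blur(img, (ksize, ksize))   # local mean around each pixel
    residual = img - local_mean                  # keep high-frequency structure
    # Rescale to [0, 255] so the result can feed the usual resize/normalize pipeline.
    residual = cv2.normalize(residual, None, 0, 255, cv2.NORM_MINMAX)
    return residual.astype(np.uint8)
```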
2. Related Work
3. Methods
3.1. Subtract Mean Filter
3.2. Depthwise Separable Convolution
3.3. Proposed Inverted Residual Attention Module (IRAM)
3.4. Proposed LWANet Network Structure
4. Experimental Results
4.1. Dataset
4.2. Implementation Details
4.3. Image Preprocessing and Baseline Selection
4.4. Performance Evaluation of LWANet
4.4.1. Overall Model Performance
4.4.2. Model Real-Time Performance
4.4.3. Ablation Study
4.4.4. Model Comparison and Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- The World Health Organization. Global Status Report on Road Safety. 2018. Available online: https://www.who.int/publications/i/item/9789241565684 (accessed on 11 March 2021).
- National Highway Traffic Safety Administration. Traffic Safety Facts. 2018. Available online: https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812806 (accessed on 11 March 2021).
- Koesdwiady, A.; Soua, R.; Karray, F.; Kamel, M.S. Recent Trends in Driver Safety Monitoring Systems: State of the Art and Challenges. IEEE Trans. Veh. Technol. 2016, 66, 4550–4563. [Google Scholar] [CrossRef]
- Regan, M.A.; Hallett, C.; Gordon, C.P. Driver distraction and driver inattention: Definition, relationship and taxonomy. Accid. Anal. Prev. 2011, 43, 1771–1781. [Google Scholar] [CrossRef] [PubMed]
- Sahayadhas, A.; Sundaraj, K.; Murugappan, M.; Palaniappan, R. A physiological measures-based method for detecting inattention in drivers using machine learning approach. Biocybern. Biomed. Eng. 2015, 35, 198–205. [Google Scholar] [CrossRef]
- Wang, Y.; Jung, T.-P.; Lin, C.-T. EEG-Based Attention Tracking During Distracted Driving. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 1085–1094. [Google Scholar] [CrossRef]
- Omerustaoglu, F.; Sakar, C.O.; Kar, G. Distracted driver detection by combining in-vehicle and image data using deep learning. Appl. Soft Comput. 2020, 96, 106657. [Google Scholar] [CrossRef]
- Li, Y.; Li, J.; Jiang, X.; Gao, C.; Zhang, T. A Driving Attention Detection Method Based on Head Pose. In Proceedings of the 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 19–23 August 2019; pp. 483–490. [Google Scholar] [CrossRef]
- Masood, S.; Rai, A.; Aggarwal, A.; Doja, M.; Ahmad, M. Detecting distraction of drivers using Convolutional Neural Network. Pattern Recognit. Lett. 2018, 139, 79–85. [Google Scholar] [CrossRef]
- Abouelnaga, Y.; Eraqi, H.M.; Moustafa, M.N. Real-time distracted driver posture classification. arXiv 2017, arXiv:1706.09498. [Google Scholar] [CrossRef]
- Dhakate, K.R.; Dash, R. Distracted Driver Detection using Stacking Ensemble. In Proceedings of the 2020 IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 22–23 February 2020; pp. 1–5. [Google Scholar] [CrossRef]
- Xing, Y.; Lv, C.; Wang, H.; Cao, D.; Velenis, E.; Wang, F.-Y. Driver Activity Recognition for Intelligent Vehicles: A Deep Learning Approach. IEEE Trans. Veh. Technol. 2019, 68, 5379–5390. [Google Scholar] [CrossRef] [Green Version]
- Huang, C.; Wang, X.; Cao, J.; Wang, S.; Zhang, Y. HCF: A Hybrid CNN Framework for Behavior Detection of Distracted Drivers. IEEE Access 2020, 8, 109335–109349. [Google Scholar] [CrossRef]
- Mase, J.M.; Chapman, P.; Figueredo, G.P.; Torres, M.T. A Hybrid Deep Learning Approach for Driver Distraction Detection. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 21–23 October 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Tang, M.; Wu, F.; Zhao, L.-L.; Liang, Q.-P.; Lin, J.-W.; Zhao, Y.-B. Detection of Distracted Driving Based on MultiGranularity and Middle-Level Features. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 2717–2722. [Google Scholar] [CrossRef]
- Hu, Y.; Lu, M.; Lu, X. Driving behaviour recognition from still images by using multi-stream fusion CNN. Mach. Vis. Appl. 2018, 30, 851–865. [Google Scholar] [CrossRef]
- Lu, M.; Hu, Y.; Lu, X. Driver action recognition using deformable and dilated faster R-CNN with optimized region proposals. Appl. Intell. 2019, 50, 1100–1111. [Google Scholar] [CrossRef]
- Rao, X.; Lin, F.; Chen, Z.; Zhao, J. Distracted driving recognition method based on deep convolutional neural network. J. Ambient Intell. Humaniz. Comput. 2019, 12, 193–200. [Google Scholar] [CrossRef]
- Han, S.; Mao, H.; Dally, W.J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv 2015, arXiv:1510.00149. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
- Yang, Z.; Ma, X.; An, J. Asymmetric Convolution Networks Based on Multi-feature Fusion for Object Detection. In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, China, 20–21 August 2020; pp. 1355–1360. [Google Scholar] [CrossRef]
- Chen, Y.; Fan, H.; Xu, B.; Yan, Z.; Kalantidis, Y.; Rohrbach, M.; Shuicheng, Y.; Feng, J. Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019. [Google Scholar] [CrossRef] [Green Version]
- Henderson, J.M.; Hayes, T.R. Meaning-based guidance of attention in scenes as revealed by meaning maps. Nat. Hum. Behav. 2017, 1, 743–747. [Google Scholar] [CrossRef] [PubMed]
- Zhang, B.; Xiong, D.; Su, J. Neural Machine Translation with Deep Attention. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 154–163. [Google Scholar] [CrossRef]
- Nguyen, M.T.; Siritanawan, P.; Kotani, K. Saliency detection in human crowd images of different density levels using attention mechanism. Signal Process. Image Commun. 2020, 88, 115976. [Google Scholar] [CrossRef]
- Deng, Z.; Jiang, Z.; Lan, R.; Huang, W.; Luo, X. Image captioning using DenseNet network and adaptive attention. Signal Process. Image Commun. 2020, 85, 115836. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, W.; Hu, Y.; Hao, J.; Chen, X.; Gao, Y. Multi-Agent Game Abstraction via Graph Attention Neural Network. Proc. Conf. AAAI Artif. Intell. 2020, 34, 7211–7218. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef] [Green Version]
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Jin, B.; Xu, Z. EAC-Net: Efficient and Accurate Convolutional Network for Video Recognition. Proc. Conf. AAAI Artif. Intell. 2020, 34, 11149–11156. [Google Scholar] [CrossRef]
- Misra, D.; Nalamada, T.; Arasanipalai, A.U.; Hou, Q. Rotate to attend: Convolutional triplet attention module. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 3139–3148. [Google Scholar]
- He, Z.; He, D. Bilinear Squeeze-and-Excitation Network for Fine-Grained Classification of Tree Species. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1139–1143. [Google Scholar] [CrossRef]
- Xie, L.; Huang, C. A Residual Network of Water Scene Recognition Based on Optimized Inception Module and Convolutional Block Attention Module. In Proceedings of the 2019 6th International Conference on Systems and Informatics (ICSAI), Shanghai, China, 2–4 November 2019; pp. 1174–1178. [Google Scholar] [CrossRef]
- Chen, Y.; Zhang, X.; Chen, W.; Li, Y.; Wang, J. Research on Recognition of Fly Species Based on Improved RetinaNet and CBAM. IEEE Access 2020, 8, 102907–102919. [Google Scholar] [CrossRef]
- Wang, H.; Wang, S.; Qin, Z.; Zhang, Y.; Li, R.; Xia, Y. Triple attention learning for classification of 14 thoracic diseases using chest radiography. Med. Image Anal. 2020, 67, 101846. [Google Scholar] [CrossRef] [PubMed]
- Pande, S.; Banerjee, B. Adaptive hybrid attention network for hyperspectral image classification. Pattern Recognit. Lett. 2021, 144, 6–12. [Google Scholar] [CrossRef]
- Wang, Q.; Wang, J.; Zhou, M.; Li, Q.; Wen, Y.; Chu, J. A 3D attention networks for classification of white blood cells from microscopy hyperspectral images. Opt. Laser Technol. 2021, 139, 106931. [Google Scholar] [CrossRef]
- Hu, Y.; Lu, M.; Lu, X. Feature refinement for image-based driver action recognition via multi-scale attention convolutional neural network. Signal Process. Image Commun. 2020, 81, 115697. [Google Scholar] [CrossRef]
- Wang, W.; Lu, X.; Zhang, P.; Xie, H.; Zeng, W. Driver Action Recognition Based on Attention Mechanism. In Proceedings of the 2019 6th International Conference on Systems and Informatics (ICSAI), Shanghai, China, 2–4 November 2019; pp. 1255–1259. [Google Scholar] [CrossRef]
- Jegham, I.; Ben Khalifa, A.; Alouani, I.; Mahjoub, M.A. Soft Spatial Attention-Based Multimodal Driver Action Recognition Using Deep Learning. IEEE Sens. J. 2020, 21, 1918–1925. [Google Scholar] [CrossRef]
- Kuan, D.T.; Sawchuk, A.A.; Strand, T.C.; Chavel, P. Adaptive Noise Smoothing Filter for Images with Signal-Dependent Noise. IEEE Trans. Pattern Anal. Mach. Intell. 1985, 7, 165–177. [Google Scholar] [CrossRef] [PubMed]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
- State Farm. State Farm Distracted Driver Detection Dataset. 2016. Available online: https://www.kaggle.com/c/state-farm-distracted-driver-detection/overview (accessed on 11 March 2021).
- Eraqi, H.M.; Abouelnaga, Y.; Saad, M.H.; Moustafa, M.N. Driver Distraction Identification with an Ensemble of Convolutional Neural Networks. J. Adv. Transp. 2019, 2019, 1–12. [Google Scholar] [CrossRef]
- Hou, R.; Zhao, Y.; Hu, Y.; Liu, H. No-reference video quality evaluation by a deep transfer CNN architecture. Signal Process. Image Commun. 2020, 83, 115782. [Google Scholar] [CrossRef]
- Zhang, B. Apply and Compare Different Classical Image Classification Method: Detect Distracted Driver; Stanford CS 229 Project Reports; 2016. [Google Scholar]
- Okon, O.D.; Meng, L. Detecting Distracted Driving with Deep Learning. In Proceedings of the International Conference on Interactive Collaborative Robotics, Hatfield, UK, 12–16 September 2017; Springer: Berlin/Heidelberg, Germany; Volume 10459, pp. 170–179. [Google Scholar] [CrossRef]
- Hssayeni, M.D.; Saxena, S.; Ptucha, R.; Savakis, A. Distracted Driver Detection: Deep Learning vs Handcrafted Features. IS&T Int. Symp. Electron. Imaging 2017, 29, 20–26. [Google Scholar] [CrossRef]
- Behera, A.; Keidel, A.H. Latent Body-Pose guided DenseNet for Recognizing Driver’s Fine-grained Secondary Activities. In Proceedings of the 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand, 27–30 November 2018; pp. 1–6. [Google Scholar] [CrossRef]
- Baheti, B.; Gajre, S.; Talbar, S. Detection of Distracted Driver Using Convolutional Neural Network. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Ai, Y.; Xia, J.; She, K.; Long, Q. Double Attention Convolutional Neural Network for Driver Action Recognition. In Proceedings of the 3rd International Conference on Electronic Information Technology and Computer Engineering (EITCE), Xiamen, China, 18–20 October 2019; pp. 1515–1519. [Google Scholar] [CrossRef]
- Jamsheed, A.; Janet, B.; Reddy, U.S. Real Time Detection of driver distraction using CNN. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 185–191. [Google Scholar] [CrossRef]
- Baheti, B.V.; Talbar, S.; Gajre, S. Towards Computationally Efficient and Realtime Distracted Driver Detection with MobileVGG Network. IEEE Trans. Intell. Veh. 2020, 5, 565–574. [Google Scholar] [CrossRef]
Layer Name | Input Shape | Output Shape | Filter Shape |
---|---|---|---|
Conv_1 | 120 × 120 × 3 | 120 × 120 × 32 | 3 × 3 × 32 |
IRAM_1 | 120 × 120 × 32 | 120 × 120 × 32 | 1 × 1 × 32 × 3, 3 × 3 × 96, 1 × 1 × 96 × 32, 3 × 3 × 32 |
DSWC_1 | 120 × 120 × 32 | 120 × 120 × 32 | 3 × 3 × 32, 1 × 1 × 32 × 32 |
Maxpool_1 | 120 × 120 × 32 | 60 × 60 × 32 | |
Conv_2 | 60 × 60 × 32 | 60 × 60 × 32 | 3 × 3 × 32 |
Maxpool_2 | 60 × 60 × 32 | 30 × 30 × 32 | |
Conv_3 | 30 × 30 × 32 | 30 × 30 × 64 | 3 × 3 × 64 |
IRAM_2 | 30 × 30 × 64 | 30 × 30 × 64 | 1 × 1 × 64 × 2, 3 × 3 × 128, 1 × 1 × 128 × 64, 3 × 3 × 64 |
DSWC_2 | 30 × 30 × 64 | 30 × 30 × 64 | 3 × 3 × 64, 1 × 1 × 64 × 64 |
Maxpool_3 | 30 × 30 × 64 | 15 × 15 × 64 | |
Conv_4 | 15 × 15 × 64 | 15 × 15 × 64 | 3 × 3 × 64 |
Maxpool_4 | 15 × 15 × 64 | 8 × 8 × 64 | |
Conv_5 | 8 × 8 × 64 | 8 × 8 × 128 | 3 × 3 × 128 |
Maxpool_5 | 8 × 8 × 128 | 4 × 4 × 128 | |
Fc_1 | 1 × 1 × 2048 | 512 | |
Fc_2 | 512 | 10 | |
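The DSWC rows in the table above are depthwise separable convolutions: a depthwise 3 × 3 convolution followed by a pointwise 1 × 1 convolution. A minimal PyTorch sketch of such a block is shown below; the padding, normalization, and activation choices are assumptions made for illustration rather than the authors' exact layer settings.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv + pointwise 1x1 conv, as in the DSWC rows above.

    BatchNorm/ReLU placement is an assumption for illustration; the paper's
    exact configuration may differ.
    """
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.depthwise(x)   # one 3x3 filter per input channel (e.g., 3 × 3 × 32)
        x = self.pointwise(x)   # 1x1 conv mixes channels (e.g., 1 × 1 × 32 × 32)
        return self.act(self.bn(x))

# Example matching DSWC_1: 120 × 120 × 32 -> 120 × 120 × 32
x = torch.randn(1, 32, 120, 120)
print(DepthwiseSeparableConv(32, 32)(x).shape)  # torch.Size([1, 32, 120, 120])
```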
Item | SF3D Accuracy | SF3D Loss | AUC2D Accuracy | AUC2D Loss |
---|---|---|---|---|
LWANet without subtract mean filter | 97.77 ± 1.53% | 0.120 ± 0.067 | 95.76 ± 2.44% | 0.216 ± 0.151 |
LWANet with subtract mean filter | 99.37 ± 0.22% | 0.026 ± 0.007 | 98.45 ± 0.28% | 0.089 ± 0.024 |
Classes | SF3D Total | SF3D Correct | SF3D Accuracy | AUC2D Total | AUC2D Correct | AUC2D Accuracy |
---|---|---|---|---|---|---|
Safe driving | 771 | 763 | 98.96% | 554 | 544 | 98.19% |
Texting-right | 655 | 654 | 99.85% | 364 | 364 | 100.00% |
Phone-right | 702 | 699 | 99.57% | 242 | 242 | 100.00% |
Texting-left | 707 | 705 | 99.72% | 209 | 207 | 99.04% |
Phone-left | 697 | 695 | 99.71% | 265 | 263 | 99.25% |
Radio | 684 | 680 | 99.42% | 222 | 222 | 100.00% |
Drinking | 696 | 695 | 99.86% | 218 | 206 | 94.50% |
Reaching behind | 586 | 581 | 99.15% | 211 | 208 | 98.58% |
Hair and makeup | 575 | 566 | 98.43% | 225 | 220 | 97.78% |
Talking to passenger | 627 | 620 | 98.88% | 390 | 379 | 97.18% |
Items | LWANet |
---|---|
GPU Processing Speed | 1485 ± 83 FPS |
Android Phone Processing Speed | 10.77 ± 1.35 FPS |
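The processing speeds above are reported in frames per second. A generic way to obtain such a throughput estimate for any PyTorch model is sketched below; the input size, warm-up length, and iteration count are illustrative assumptions, and this is not the authors' benchmarking script.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model: torch.nn.Module, input_size=(1, 3, 120, 120),
                warmup: int = 50, iters: int = 500, device: str = "cuda") -> float:
    """Rough inference throughput (frames per second) on random input."""
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):          # warm up kernels/caches before timing
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # wait for queued GPU work before stopping the clock
    return iters / (time.time() - start)
```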
Items | Standard VGG16 | Lightweight VGG without IRAM | LWANet |
---|---|---|---|
FLOPs | 455,867,390 | 7,666,141 | 7,715,347 |
Trainable Parameters | 65,120,350 | 1,199,626 | 1,224,208 |
Model File Size | 248 MB | 4.58 MB | 4.68 MB |
Accuracy on SF3D | 99.39 ± 0.13% | 98.87 ± 0.26% | 99.37 ± 0.22% |
Accuracy on AUC2D | 98.58 ± 0.24% | 97.32 ± 0.58% | 98.45 ± 0.28% |
GPU Processing Speed | 851 ± 75 FPS | 1594 ± 92 FPS | 1485 ± 83 FPS |
Android Phone Processing Speed | 0.67 ± 0.41 FPS | 10.98 ± 1.27 FPS | 10.77 ± 1.35 FPS |
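The trainable-parameter counts in the table above can be reproduced for any PyTorch model with a one-line helper; this is a generic counting sketch (FLOP counts usually come from a separate profiler and are omitted here).

```python
import torch.nn as nn

def count_trainable_parameters(model: nn.Module) -> int:
    """Sum the elements of every tensor that participates in training."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical usage: for the LWANet configuration above this should return
# roughly 1.22 million, versus about 65 million for the standard VGG16.
```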
Items | Parallel | Series-Spatial First | Series-Channel First |
---|---|---|---|
Accuracy on SF3D | 99.31 ± 0.05% | 99.04 ± 0.32% | 99.37 ± 0.22% |
Accuracy on AUC2D | 98.11 ± 0.34% | 97.70 ± 0.52% | 98.45 ± 0.28% |
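The ablation above compares how IRAM's channel and spatial attention branches are arranged, with the series, channel-first ordering performing best. The sketch below illustrates only that ordering, using generic CBAM-style channel and spatial attention blocks as stand-ins; the reduction ratio, kernel size, and the surrounding inverted residual structure of the actual IRAM (Section 3.3) are not reproduced, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative stand-in)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.fc(x.mean(dim=(2, 3)))            # global average pool -> per-channel weights
        return x * weights.unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    """Single-conv spatial attention over channel-pooled maps (illustrative stand-in)."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class SeriesChannelFirst(nn.Module):
    """Channel attention followed by spatial attention, i.e., the best-performing order above."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))
```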
| Author | Year | Network Baseline | Attention Module | Involved Dataset | Accuracy | Trainable Parameters |
|---|---|---|---|---|---|---|
| Zhang et al. [47] | 2016 | VGG16 | No | SF3D | 90.2% | 140 M |
| | | VGG-GAP | | | 91.3% | |
| | | Ensemble VGG16 and VGG-GAP | | | 92.6% | >140 M |
| Okon et al. [48] | 2017 | AlexNet + Softmax | No | SF3D | 96.8% | 63.2 M |
| | | AlexNet + Triplet Loss | | | 98.7% | |
| Hssayeni et al. [49] | 2017 | ResNet | No | SF3D | 85% | 60 M |
| Abouelnaga et al. [10] | 2018 | AlexNet | No | AUC2D | 93.65% | 62 M |
| | | Inception V3 | | | 95.17% | 24 M |
| | | Majority Voting Ensemble | | | 95.77% | 120 M |
| | | GA-Weighted Ensemble | | | 95.98% | 120 M |
| Behera et al. [50] | 2018 | DenseNet | No | AUC2D | 94.2% | 8.06 M |
| Baheti et al. [51] | 2018 | VGG16 | No | AUC2D | 94.44% | 140 M |
| | | VGG16 + Regularization | | | 96.31% | 140 M |
| | | Modified VGG16 | | | 95.54% | 15 M |
| Hu et al. [16] | 2018 | VGG16 | No | SF3D | 86.6% | 33.56 M |
| | | | | AUC2D | 93.2% | |
| Ai et al. [52] | 2019 | VGG16-one attention | Yes | AUC2D | 84.82% | >140 M |
| | | VGG16-two-way attention | | | 87.74% | >140 M |
| Janet et al. [53] | 2019 | Vanilla CNN | No | SF3D | 97.05% | 26.05 M |
| Wang et al. [39] | 2019 | VGG16 | Yes | SF3D | 88.67% | >65.12 M |
| | | ResNet50 | | | 92.45% | >46.16 M |
| Xing et al. [12] | 2019 | AlexNet | No | Self-Collected | 91.4% | 59.97 M |
| | | GoogLeNet | | | 87.5% | 6.8 M |
| | | ResNet50 | | | 83.0% | 46.16 M |
| Huang et al. [13] | 2020 | ResNet50 + Inception V3 + Xception | No | SF3D | 96.74% | >72.3 M |
| Dhakate et al. [11] | 2020 | VGG16 | No | SF3D | 58.3% | 140 M |
| | | VGG19 | | | 55.7% | 142 M |
| | | Inception V3 | | | 92.9% | 25.6 M |
| | | Xception | | | 82.5% | 22.9 M |
| | | Inception V3 + Xception | | | 90% | 46.7 M |
| | | Inception V3 + ResNet50 + Xception + VGG19 | | | 97% | 214.3 M |
| Lu et al. [17] | 2020 | Faster R-CNN | Yes | SF3D | 86.0% | 6.53 M |
| | | | | SEU | 92.1% | |
| Baheti et al. [54] | 2020 | VGG16 | No | SF3D | 99.75% | 2.2 M |
| | | | | SEU | 95.24% | |
| Hu et al. [38] | 2020 | Multi-scale CNN | Yes | R-DA | 94.0% | 44.06 M |
| | | | | SF3D | 96.7% | |
| | | | | S-DA | 91.8% | |
| Jegham et al. [40] | 2021 | VGG16 | Yes | MDAD | 75% | >65.12 M |
| LWANet | 2021 | VGG16 | Yes | SF3D | 99.37 ± 0.22% | 1.22 M |
| | | | | AUC2D | 98.45 ± 0.28% | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).