Explaining Deep Learning-Based Driver Models
Abstract
1. Introduction
2. Background and Related Work
2.1. Driver Behavior and Distraction Recognition
Driver Emotion Recognition
2.2. Explainable Artificial Intelligence (XAI)
2.3. XAI and Automotive Environment
2.4. Exploring XAI Techniques in Driver Behavior Modeling
- LIME [54] is a technique based on local linear approximations. It provides a specific explanation for each prediction that is locally accurate but does not have to be globally accurate. LIME samples instances close to the one that requires explaining (that is, with small variations in their attributes) and, based on the classes predicted for those sampled instances, fits a linear approximation that yields a simple explanation. In this explanation, each attribute has either a positive weight for a class, in which case it contributes to the classification into that class, or a negative weight. In the case of image recognition, the image is divided into super-pixels, and those that contribute positively to the class are displayed (a minimal usage sketch is given after this list).
- Anchors [55] is a technique that also uses local approximations. However, in this case, the approximations are not linear; they are if–then rules obtained through a search process. An anchor is thus an if–then rule such that, if the conditions established by the anchor are met, the prediction is very likely to remain stable even if the other attributes vary. In this way, anchors provide a local explanation for a prediction, revealing the conditions under which the prediction is most likely to be repeated. In image recognition tasks, the method shows the fragment of the image that conditions the prediction (if that fragment appears in a different picture, the predicted class is very unlikely to change).
- The SHAP method [56] makes it possible, given an instance, to know which of its attributes have contributed to its classification, as well as which ones have reduced the probability of it belonging to that class. To do this, SHAP combines a linear approximation with Shapley values [57], which estimate the importance of each attribute by evaluating different combinations of attributes and weighting the impact of each modification on the predicted class. For an image, SHAP highlights the pixels that were relevant for the classification, both positively and negatively (as a heatmap).
- DeepLIFT [58] is a technique specific to deep neural networks that analyzes the activations of the neurons in order to discover which attributes are most relevant for the classification of an instance. To do this, DeepLIFT uses a neutral reference against which differences in neuron activations are measured. The difference between the activation of each neuron for the reference and for the instance is computed and, using backpropagation, the contribution of the different attributes to the final classification is calculated. In the case of image recognition, the pixels ranked as most important for a specific prediction can be obtained. As the reference, the authors propose trying several images, such as a distorted version of the original image, to see which is most useful for the model.
- Integrated gradients [59] rely on a concept similar to the DeepLIFT reference image, here called the baseline, combined with the gradient operation. To compute the importance of the attributes, the method evaluates the gradients at points along the path between the baseline and the input and accumulates them to obtain the integrated gradients (a short from-scratch sketch is shown after this list). The baseline proposed for image recognition is an “empty”, completely black image, but this has the drawback that, on dark images, it can reduce the attribution of dark pixels.
- XRAI [60] is an improvement over integrated gradients: although it starts from the same idea (using baselines and approximating importance through integrated gradients), it adds elements that improve its accuracy and understandability. The main improvement is that XRAI segments the image into regions, and the importance of each region is then calculated by aggregating the integrated gradients. These regions can be of multiple sizes and approximate the shapes of the objects in the image very well. This makes it much easier for users to visualize the parts of the image that were relevant for the classification, since they can observe the most important sets of pixels instead of individual pixels with empty spaces between them. In addition, to avoid the black-image problem of integrated gradients, XRAI uses two baselines, one black and one white. The user can also configure the percentage of regions to be displayed (the top 10%, the top 20%, etc.), so that the method can be adjusted to the characteristics of the model. For these reasons, since XRAI is the method that offers the best balance between precision and understandability, we decided to apply XRAI to achieve the goal of this paper (a usage sketch based on a publicly available implementation is also given after this list).
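To make the above descriptions more concrete, the following sketches illustrate how some of these techniques are typically applied to an image classifier in Python. They are minimal examples under stated assumptions, not the pipeline used in this paper. The first sketch uses the `lime` package to explain a single prediction; `model`, `image`, and the number of super-pixels shown are illustrative placeholders.

```python
# Minimal LIME sketch (assumes the `lime` and `scikit-image` packages and a
# Keras-style `model` whose predict() returns class probabilities).
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    # LIME calls this with a batch of perturbed copies of the input image.
    return model.predict(np.asarray(images))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,              # H x W x 3 array
    classifier_fn,
    top_labels=1,       # explain only the most likely class
    num_samples=1000)   # perturbed neighbours sampled around the instance

# Keep the super-pixels that contribute positively to the predicted class.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=True)
overlay = mark_boundaries(img / 255.0, mask)   # image with super-pixel borders
```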
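The core computation behind integrated gradients can be written in a few lines. The sketch below uses TensorFlow (an assumption; any framework with automatic differentiation would do) and a black baseline, averaging the gradients of the target-class score along the straight path from the baseline to the input.

```python
# Minimal integrated-gradients sketch (assumes TensorFlow and a
# differentiable classifier `model` returning class probabilities).
import tensorflow as tf

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Accumulate gradients along the straight path baseline -> image."""
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    if baseline is None:
        baseline = tf.zeros_like(image)           # the "empty" black baseline
    # Interpolated images between the baseline and the input.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1, 1, 1))
    interpolated = baseline[tf.newaxis] + alphas * (image - baseline)[tf.newaxis]
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = model(interpolated)[:, target_class]
    grads = tape.gradient(scores, interpolated)
    # Trapezoidal average of the gradients, scaled by the input-baseline delta.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (image - baseline) * avg_grads         # per-pixel attributions
```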
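XRAI has an open-source reference implementation in the PAIR `saliency` package. The sketch below shows how it is typically invoked; the API names (`saliency.core`, `INPUT_OUTPUT_GRADIENTS`, `GetMask`) reflect that library's documented interface and, like the TensorFlow `model`, `image`, and `pred_class` placeholders, are assumptions to be checked against the installed version.

```python
# Minimal XRAI sketch using the PAIR `saliency` package (assumed API;
# `model`, `image` and `pred_class` are illustrative placeholders).
import numpy as np
import tensorflow as tf
import saliency.core as saliency

def call_model_function(images, call_model_args=None, expected_keys=None):
    # The library calls this repeatedly to obtain input gradients.
    target = call_model_args['class_idx']
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        scores = model(images)[:, target]
    grads = tape.gradient(scores, images)
    return {saliency.INPUT_OUTPUT_GRADIENTS: grads.numpy()}

xrai = saliency.XRAI()
attributions = xrai.GetMask(image, call_model_function,
                            {'class_idx': pred_class}, batch_size=20)

# Keep only the most salient regions, e.g., the top 20% of attribution mass.
top_20_mask = attributions >= np.percentile(attributions, 80)
```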
3. Application of XRAI in Driver Modeling
3.1. Target ADAS
3.1.1. Emotions Model
3.1.2. Activity Model
3.2. Applied XAI Technique: XRAI
4. Experimental Setup and Results
4.1. Datasets
4.1.1. Datasets Previously Used for Training
- The Karolinska Directed Emotional Faces (KDEF): used to test the emotions model;
- Multimodal Multiview and Multispectral Driver Action Dataset (3MDAD): used to test the activity model.
4.1.2. New Datasets
4.2. Results and Discussion
4.2.1. Emotions Model’s Results
4.2.2. Activity Model’s Results
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Extended Results: Confusion Matrices
References
- Doshi-Velez, F.; Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608. [Google Scholar]
- Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 2018, 51. [Google Scholar] [CrossRef] [Green Version]
- Rossi, F. Building trust in artificial intelligence. J. Int. Aff. 2018, 72, 127–134. [Google Scholar]
- Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef] [Green Version]
- Sipele, O.; Zamora, V.; Ledezma, A.; Sanchis, A. Advanced Driver’s Alarms System through Multi-agent Paradigm. In Proceedings of the 2018 3rd IEEE International Conference on Intelligent Transportation Engineering (ICITE), Singapore, 3–5 September 2018; pp. 269–275. [Google Scholar]
- Hasenjäger, M.; Heckmann, M.; Wersing, H. A Survey of Personalization for Advanced Driver Assistance Systems. IEEE Trans. Intell. Veh. 2020, 5, 335–344. [Google Scholar] [CrossRef]
- Alvarez, L. Driver Emotion and Behavior Recognition Using Deep Learning. Universidad Carlos III de Madrid, 2020. Available online: https://e-archivo.uc3m.es/handle/10016/32287 (accessed on 7 April 2021).
- Dickmanns, E.D.; Mysliwetz, B.D. Recursive 3-D road and relative ego-state recognition. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 199–213. [Google Scholar] [CrossRef]
- Viktorová, L.; Sucha, M. Learning about advanced driver assistance systems—The case of ACC and FCW in a sample of Czech drivers. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 576–583. [Google Scholar] [CrossRef]
- Singh, S. Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey; US Department of Transportation, NHTSA: Washington, DC, USA, 2015.
- Khan, M.Q.; Lee, S. Gaze and Eye Tracking: Techniques and Applications in ADAS. Sensors 2019, 19, 5540. [Google Scholar] [CrossRef] [Green Version]
- Iglesias, J.A.; Ledezma, A.; Sanchis, A. Evolving classification of UNIX users’ behaviors. Evol. Syst. 2014, 5, 231–238. [Google Scholar] [CrossRef]
- Ozguner, U.; Acarman, T.; Redmill, K.A. Autonomous Ground Vehicles; Artech House: Norwood, MA, USA, 2011. [Google Scholar]
- Abu Ali, N.; Abou-zeid, H. Driver Behavior Modeling: Developments and Future Directions. Int. J. Veh. Technol. 2016, 2016, 1–12. [Google Scholar] [CrossRef] [Green Version]
- Andonovski, G.; Sipele, O.; Iglesias, J.A.; Sanchis, A.; Lughofer, E.; Skrjanc, I. Detection of driver maneuvers using evolving fuzzy cloud-based system. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence, SSCI2020, Canberra, Australia, 1–4 December 2020; pp. 700–706. [Google Scholar]
- Sawade, O.; Schulze, M.; Radusch, I. Robust Communication for Cooperative Driving Maneuvers. IEEE Intell. Transp. Syst. Mag. 2018, 10, 159–169. [Google Scholar] [CrossRef]
- Skrjanc, I.; Andonovski, G.; Ledezma, A.; Sipele, O.; Iglesias, J.A.; Sanchis, A. Evolving cloud-based system for the recognition of drivers’ actions. Expert Syst. Appl. 2018, 99, 231–238. [Google Scholar] [CrossRef]
- Liu, S.; Zheng, K.; Zhao, L.; Fan, P. A driving intention prediction method based on hidden Markov model for autonomous driving. Comput. Commun. 2020, 157. [Google Scholar] [CrossRef] [Green Version]
- Mühlbacher-Karrer, S.; Mosa, A.; Faller, L.; Ali, M.; Hamid, R.; Zangl, H.; Kyamakya, K. A Driver State Detection System—Combining a Capacitive Hand Detection Sensor with Physiological Sensors. IEEE Trans. Instrum. Meas. 2017, 66, 624–636. [Google Scholar] [CrossRef]
- Cristea, R.; Rulewitz, S.; Radusch, I.; Hübner, K.; Schünemann, B. Implementation of Cognitive Driver Models in Microscopic Traffic Simulations. In Proceedings of the 9th EAI International Conference on Simulation Tools and Techniques, Prague, Czech Republic, 22–23 August 2016. [Google Scholar]
- Valeriano, L.C.; Napoletano, P.; Schettini, R. Recognition of driver distractions using deep learning. In Proceedings of the 2018 IEEE 8th International Conference on Consumer Electronics, Berlin, Germany, 2–5 September 2018; pp. 1–6. [Google Scholar]
- Masood, S.; Rai, A.; Aggarwal, A.; Doja, M.N.; Ahmad, M. Detecting distraction of drivers using convolutional neural network. Pattern Recognit. Lett. 2018, 139, 79–85. [Google Scholar] [CrossRef]
- Yun, Y.; Gu, I.Y.; Bolbat, M.; Khan, Z.H. Video-based detection and analysis of driver distraction and inattention. In Proceedings of the 2014 International Conference on Signal Processing and Integrated Networks (SPIN), Amityville, NY, USA, 20–21 February 2014; pp. 190–195. [Google Scholar]
- Rangesh, A.; Trivedi, M.M. Handynet: A one-stop solution to detect, segment, localize & analyze driver hands. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1103–1110. [Google Scholar]
- Mu, Z.; Hu, J.; Yin, J. Driving fatigue detecting based on EEG signals of forehead area. Int. J. Pattern Recognit. Artif. Intell. 2017, 31, 1750011. [Google Scholar] [CrossRef]
- Frijda, N.H.; Manstead, A.S.; Bem, S. The influence of emotions on beliefs. In Emotions and Beliefs: How Feelings Influence Thoughts; Cambridge University Press: New York, NY, USA, 2000; pp. 1–9. [Google Scholar]
- Ekman, P. An argument for basic emotions. Cogn. Emot. 1992, 6, 169–200. [Google Scholar] [CrossRef]
- Ekman, R. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS); Oxford University Press: New York, NY, USA, 1997. [Google Scholar]
- Sini, J.; Marceddu, A.C.; Violante, M. Automatic Emotion Recognition for the Calibration of Autonomous Driving Functions. Electronics 2020, 9, 518. [Google Scholar] [CrossRef] [Green Version]
- Tarnowski, P.; Kołodziej, M.; Majkowski, A.; Rak, R.J. Emotion recognition using facial expressions. Procedia Comput. Sci. 2017, 108, 1175–1184. [Google Scholar] [CrossRef]
- Alarcão, S.M.; Fonseca, M.J. Emotions Recognition Using EEG Signals: A Survey. IEEE Trans. Affect. Comput. 2019, 10, 374–393. [Google Scholar] [CrossRef]
- Sheykhivand, S.; Mousavi, Z.; Rezaii, T.Y.; Farzamnia, A. Recognizing Emotions Evoked by Music Using CNN-LSTM Networks on EEG Signals. IEEE Access 2020, 8, 139332–139345. [Google Scholar] [CrossRef]
- Emerich, S.; Lupu, E.; Apatean, A. Emotions recognition by speech and facial expressions analysis. In Proceedings of the 2009 17th European Signal Processing Conference, Glasgow, Scotland, UK, 24–28 August 2009; pp. 1617–1621. [Google Scholar]
- Hua, W.; Dai, F.; Huang, L.; Xiong, J.; Gui, G. HERO: Human Emotions Recognition for Realizing Intelligent Internet of Things. IEEE Access 2019, 7, 24321–24332. [Google Scholar] [CrossRef]
- Eyben, F.; Wöllmer, M.; Poitschke, T.; Schuller, B.; Blaschke, C.; Färber, B.; Nguyen-Thien, N. Emotion on the Road: Necessity, Acceptance, and Feasibility of Affective Computing in the Car. Adv. Hum. Comput. Int. 2010, 2010. [Google Scholar] [CrossRef] [Green Version]
- Izard, C.E. Emotion Theory and Research: Highlights, Unanswered Questions, and Emerging Issues. Annu. Rev. Psychol. 2009, 60, 1–25. [Google Scholar] [CrossRef] [Green Version]
- Steinhauser, K.; Leist, F.; Maier, K.; Michel, V.; Pärsch, N.; Rigley, P.; Wurm, F.; Steinhauser, M. Effects of emotions on driving behavior. Transp. Res. Part F Traffic Psychol. Behav. 2018, 59, 150–163. [Google Scholar] [CrossRef]
- Verma, B.; Choudhary, A. A Framework for Driver Emotion Recognition using Deep Learning and Grassmann Manifolds. In Proceedings of the 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1421–1426. [Google Scholar]
- Ali, M.; Machot, F.A.; Mosa, A.H.; Kyamakya, K. CNN Based Subject-Independent Driver Emotion Recognition System Involving Physiological Signals for ADAS. In Advanced Microsystems for Automotive Applications 2016; Schulze, T., Müller, B., Meyer, G., Eds.; Springer: New York, NY, USA, 2016; pp. 125–138. [Google Scholar]
- Zepf, S.; Hernandez, J.; Schmitt, A.; Minker, W.; Picard, R.W. Driver Emotion Recognition for Intelligent Vehicles: A Survey. ACM Comput. Surv. 2020, 53. [Google Scholar] [CrossRef]
- Siau, K.; Wang, W. Building trust in artificial intelligence, machine learning, and robotics. Cut. Bus. Technol. J. 2018, 31, 47–53. [Google Scholar]
- Cutillo, C.M.; Sharma, K.R.; Foschini, L.; Kundu, S.; Mackintosh, M.; Mandl, K.D. Machine intelligence in healthcare—Perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digit. Med. 2020, 3, 1–5. [Google Scholar] [CrossRef] [Green Version]
- Phillips, P.J.; Hahn, C.; Fontana, P.; Broniatowski, D.; Przybocki, M. Four Principles of Explainable Artificial Intelligence; US Department of Commerce: Washington, DC, USA, 2020.
- Turek, M. Explainable artificial intelligence (XAI); Defense Advanced Research Projects Agency: Arlington County, VA, USA, 2018; Available online: https://www.darpa.mil/program/explainable-artificial-intelligence (accessed on 25 March 2021).
- Villata, S.; Boella, G.; Gabbay, D.M.; Van Der Torre, L. A socio-cognitive model of trust using argumentation theory. Int. J. Approx. Reason. 2013, 54, 541–559. [Google Scholar] [CrossRef]
- Madumal, P.; Miller, T.; Sonenberg, L.; Vetere, F. A grounded interaction protocol for explainable artificial intelligence. arXiv 2019, arXiv:1903.02409. [Google Scholar]
- Nowak, T.; Nowicki, M.R.; Ćwian, K.; Skrzypczyński, P. How to improve object detection in a driver assistance system applying explainable deep learning. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 226–231. [Google Scholar]
- Sun, L.; Zhan, W.; Hu, Y.; Tomizuka, M. Interpretable modelling of driving behaviors in interactive driving scenarios based on cumulative prospect theory. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 4329–4335. [Google Scholar]
- Tversky, A.; Kahneman, D. Advances in Prospect Theory: Cumulative Representation of Uncertainty. J. Risk Uncertain. 1992, 5, 297–323. [Google Scholar] [CrossRef]
- Ng, A.Y.; Russell, S. Algorithms for Inverse Reinforcement Learning. In Proceedings of the 17th International Conference on Machine Learning, Austin, TX, USA, 21–23 June 2000; pp. 663–670. [Google Scholar]
- Ripley, B.D. Pattern Recognition and Neural Networks; Cambridge University Press: New York, NY, USA, 1996. [Google Scholar]
- Kim, J. Explainable and Advisable Learning for Self-Driving Vehicles; eScholarship, University of California: San Francisco, CA, USA, 2019. [Google Scholar]
- Kim, J.; Rohrbach, A.; Darrell, T.; Canny, J.; Akata, Z. Textual Explanations for Self-Driving Vehicles. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. arXiv 2016, arXiv:1602.04938. [Google Scholar]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. Anchors: High-Precision Model-Agnostic Explanations. In Human-AI Collaboration, Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; McIlraith, S.A., Weinberger, K.Q., Eds.; AAAI Press: Menlo Park, CA, USA, 2018; pp. 1527–1535. [Google Scholar]
- Lundberg, S.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. arXiv 2017, arXiv:1705.07874. [Google Scholar]
- Aumann, R.; Shapley, L. Values of non-atomic games. Bull. Am. Math. Soc. 1975, 81, 539–546. [Google Scholar]
- Shrikumar, A.; Greenside, P.; Kundaje, A. Learning Important Features Through Propagating Activation Differences. arXiv 2019, arXiv:1704.02685. [Google Scholar]
- Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic Attribution for Deep Networks. arXiv 2017, arXiv:1703.01365. [Google Scholar]
- Kapishnikov, A.; Bolukbasi, T.; Viégas, F.; Terry, M. XRAI: Better Attributions Through Regions. arXiv 2019, arXiv:1906.02825. [Google Scholar]
- Kanade, T.; Cohn, J.F.; Tian, Y. Comprehensive database for facial expression analysis. In Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580), Grenoble, France, 26–30 March 2000; pp. 46–53. [Google Scholar]
- Lucey, P.; Cohn, J.F.; Kanade, T.; Saragih, J.; Ambadar, Z.; Matthews, I. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 94–101. [Google Scholar]
- Carrier, P.L.; Courville, A. Challenges in Representation Learning: Facial Expression Recognition Challenge, Version 1. 2013. Available online: https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data (accessed on 25 March 2021).
- Lyons, M.; Kamachi, M.; Gyoba, J. The Japanese Female Facial Expression (JAFFE) Dataset. Available online: https://doi.org/10.5281/zenodo.3451524 (accessed on 7 April 2021).
- Lyons, M.; Akamatsu, S.; Kamachi, M.; Gyoba, J. Coding facial expressions with Gabor wavelets. In Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, The Woodlands, TX, USA, 21–23 June 1998; pp. 200–205. [Google Scholar]
- Lundqvist, D.; Flykt, A.; Öhman, A. The Karolinska Directed Emotional Faces—KDEF, CD ROM from Department of Clinical Neuroscience, Psychology Section; Karolinska Institutet: Solna, Sweden, 1998. [Google Scholar]
- Jegham, I.; Ben Khalifa, A.; Alouani, I.; Mahjoub, M.A. MDAD: A Multimodal and Multiview in-Vehicle Driver Action Dataset. In Computer Analysis of Images and Patterns; Vento, M., Percannella, G., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 518–529. [Google Scholar]
- Jegham, I.; Ben Khalifa, A.; Alouani, I.; Mahjoub, M.A. A novel public dataset for multimodal multiview and multispectral driver distraction analysis: 3MDAD. Signal Process. Image Commun. 2020, 88, 115960. [Google Scholar] [CrossRef]
- Felzenszwalb, P.; Huttenlocher, D. Efficient Graph-Based Image Segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
- Langner, O.; Dotsch, R.; Bijlstra, G.; Wigboldus, D.H.J.; Hawk, S.T.; van Knippenberg, A. Presentation and validation of the Radboud Faces Database. Cogn. Emot. 2010, 24, 1377–1388. [Google Scholar] [CrossRef]
- State Farm. State Farm Distracted Driver Detection. 2016. Available online: https://www.kaggle.com/c/state-farm-distracted-driver-detection/data (accessed on 3 October 2020).
- Abouelnaga, Y.; Eraqi, H.M.; Moustafa, M.N. Real-time Distracted Driver Posture Classification. arXiv 2017, arXiv:1706.09498. [Google Scholar]
- Eraqi, H.M.; Abouelnaga, Y.; Saad, M.H.; Moustafa, M.N. Driver Distraction Identification with an Ensemble of Convolutional Neural Networks. J. Adv. Transp. 2019, 2019. [Google Scholar] [CrossRef]
- Xing, Y.; Lv, C.; Wang, H.; Cao, D.; Velenis, E.; Wang, F. Driver Activity Recognition for Intelligent Vehicles: A Deep Learning Approach. IEEE Trans. Veh. Technol. 2019, 68, 5379–5390. [Google Scholar] [CrossRef] [Green Version]
| Dataset’s Name | No. of Subjects | No. of Samples | Image Size | Specific Information |
|---|---|---|---|---|
| The Extended Cohn-Kanade Dataset (CK+) [61,62] | 123 | >10,000 (593 different sequences) | 640 × 490 pixels | Diverse dataset: it includes men and women of different races |
| Facial Expression Recognition Challenge: FER (2013) [63] | Undefined | >35,000 | 48 × 48 pixels | Created by searching Google for the target emotions. A small percentage of the images is mislabelled or does not represent a person |
| The Japanese Female Facial Expression (JAFFE) [64,65] | 10 | 219 | 256 × 256 pixels | The subjects are all Japanese women |
| The Karolinska Directed Emotional Faces (KDEF) [66] | 70 | 4900 | 562 × 762 pixels | 50% men, 50% women. Photographs taken from 5 different angles |

| Dataset’s Name | No. of Subjects | No. of Samples | Image Size | Specific Information |
|---|---|---|---|---|
| Multimodal Multiview and Multispectral Driver Action Dataset (3MDAD) [67,68] | 50 | >110,000 (16 different activities) | 640 × 480 pixels | Sequences of images showing every frame between the beginning and end of an action. Some frames do not represent the activity they are labelled as |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).