Development of an In-Vehicle Intrusion Detection Model Integrating Federated Learning and LSTM Networks
Abstract
1. Introduction
- Implementation of federated learning, allowing multiple vehicles to collaborate in model training without sharing sensitive data, which mitigates the risks associated with data centralisation.
- Extension of the detection system, increasing the number of identifiable attacks from five (as in [33]) to eight, which significantly enhances the robustness and scope of protection.
- Comparative evaluation of different architectures, including Multilayer Perceptrons (MLPs), Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRUs), leveraging the time-series nature of vehicular communication data.
- Study of the effect of applying differential privacy with different noise types (Gaussian and Laplacian) on the performance and accuracy of the model.
2. Materials and Methods
2.1. Federated Learning Approach
2.2. Data Preprocessing
- Type: type of message (e.g., GPS, BSM).
- SendTime: time at which the message was sent.
- Sender: identifier of the message sender.
- SenderPseudo: pseudonym of the sender.
- MessageID: unique identifier of the message.
- Class: number indicating the class of the message (e.g., normal, constant position offset, etc.).
- posx, posy, posz: X, Y, and Z coordinates of the vehicle’s position.
- Noise variables associated with the X, Y, and Z coordinates of the vehicle’s position.
- spdx, spdy, spdz: speed along the X, Y, and Z axes.
- Noise variables associated with the speed along the X, Y, and Z axes.
- aclx, acly, aclz: acceleration along the X, Y, and Z axes.
- Noise variables associated with the acceleration along the X, Y, and Z axes.
- hedx, hedy, hedz: heading direction along the X, Y, and Z axes.
- Noise variables associated with the heading direction along the X, Y, and Z axes.
- When the model is deployed on data from different vehicles, the ID numbering sequences do not necessarily match those seen during training, which could cause serious generalisation issues.
- In the dataset, the messages sent by a given vehicle correspond to a single attack type, so all messages sharing the same sender ID (or pseudonym ID) also correspond to a single attack type, as these values are unique to each vehicle. If these variables were kept, the model could simply “memorise” which IDs correspond to each attack type in the dataset, again causing serious generalisation issues.
- The first type contains, in addition to the variables that were not removed (positions and speeds on the X and Y axes), one new variable per original variable with a lag of 1; that is, each retained variable is accompanied by its value from the previous message.
- The second type of dataset includes two new variables for each retained variable, corresponding to the values of the two previous messages (lags 1 and 2).
- The third type of dataset adds five new variables for each retained variable, corresponding to the values of the five previous messages (lags 1 to 5). A minimal sketch of this lag construction is given after this list.
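As a minimal sketch of this preprocessing step, assuming a pandas DataFrame with the column names listed above (the function name and exact column spellings are illustrative, not the authors’ code):

```python
import pandas as pd

def build_lagged_dataset(df: pd.DataFrame, features, n_lags: int) -> pd.DataFrame:
    """Append lagged copies of the kept features and drop identifier columns.

    Lags are computed per sender, so each message is only paired with the
    previous messages of the same vehicle.
    """
    df = df.sort_values("sendTime")
    out = df.copy()
    for lag in range(1, n_lags + 1):
        for col in features:
            out[f"{col}_lag{lag}"] = df.groupby("sender")[col].shift(lag)
    # Identifiers are removed so the model cannot memorise attacker IDs.
    out = out.drop(columns=["sender", "senderPseudo", "messageID"])
    # The first n_lags messages of each vehicle lack a complete history.
    return out.dropna()

# Second dataset type (lags 1 and 2) over the retained kinematic features:
# lagged_df = build_lagged_dataset(raw_df, ["posx", "posy", "spdx", "spdy"], n_lags=2)
```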
2.3. Models
2.3.1. MLP
2.3.2. GRU
2.3.3. LSTM
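The layer-by-layer configurations are not reproduced in this extract, so the following is only an illustrative sketch (TensorFlow/Keras assumed; the number of units, dropout rate, and window length are placeholders) of the kind of LSTM classifier compared in Section 3: a short window of lagged kinematic features, a recurrent layer, the dropout mentioned in Section 3.1, and a softmax over the nine message classes.

```python
import tensorflow as tf

NUM_CLASSES = 9   # normal traffic plus the eight attack types
WINDOW = 3        # current message plus two lags (the lag-2 dataset variant)
N_FEATURES = 4    # posx, posy, spdx, spdy

def build_lstm(units: int = 64, dropout: float = 0.3) -> tf.keras.Model:
    """Minimal LSTM classifier over a short sequence of consecutive messages."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dropout(dropout),   # limits per-client overfitting (Section 3.1)
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```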
2.4. Differential Privacy
3. Results and Discussion
3.1. Federated Learning Implementation and Particularities
- Server initialisation: The central server starts the training session, setting all the hyperparameters and preparing the global model. In this case, all experiments run 25 training rounds, with 5 local training epochs in each round, and a batch size of 64.
- Client selection and notification: The server selects a subset of clients (in this case, all clients are selected) to participate in the current training round.
- Distribution of the global model: The server sends each client selected in the previous step a copy of the current global model weights.
- Splitting local data into train and test: Each client reserves 20% of its local data for a later test phase. This split is performed only in the first round and is kept for the remaining rounds.
- Local model training: After splitting the data, each client trains their copy of the global model on their local train set. This training is performed for five epochs in each round. It is worth noting that, to prevent each local model from overfitting to the peculiarities of each client’s local data, dropout layers have been introduced in all trained model architectures. This ensures proper generalisation capacity of the global model.
- Aggregation of local model weights: After local training, the clients send their local model weights to the server, where they are aggregated using the Federated Averaging (FedAvg) strategy, which averages the local weights to update the global model weights. To promote convergence, the global weights are only updated if the average accuracy across all clients has improved with respect to the previous round (a minimal sketch of this aggregation step is given after this list).
- Local evaluation of the global model: Once all local weights are aggregated and set as the new global model weights, they are sent back to the clients, who perform a local evaluation of the current global model on the local test data reserved in the first training round.
- Global model evaluation: After the local evaluation, the server performs its own evaluation of the global model on its own test set, which was extracted during the data preprocessing phase, explained in Section 2.2.
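A minimal sketch of the aggregation step referenced above (plain NumPy; with equal client dataset sizes the weighted average reduces to the simple average described in the text, and the function names are illustrative):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average the clients' weight lists, weighted by local dataset size."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        np.sum([w[i] * (n / total) for w, n in zip(client_weights, client_sizes)], axis=0)
        for i in range(n_layers)
    ]

def accept_if_improved(global_weights, candidate, mean_client_acc, best_acc):
    """Keep the aggregated weights only if the average client accuracy improved."""
    if mean_client_acc > best_acc:
        return candidate, mean_client_acc
    return global_weights, best_acc
```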
- Global evaluation: The global model is evaluated on the server using its own test set, extracted as explained in Section 2.2. These data are completely independent of the clients’ data, ensuring the global model is tested on data entirely different from those used in training.
- Local or decentralised evaluation: The performance of the global model is also evaluated on the clients’ test sets (each comprising 20% of a client’s local data), which are never seen during training but are likely more similar to the training data (a client-side sketch of this local training and evaluation cycle follows this list).
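On the client side, this local training and evaluation cycle maps naturally onto the Flower framework cited in the references; the following is a minimal sketch under that assumption (Keras model, recent Flower NumPyClient signatures), not the authors’ implementation:

```python
import flwr as fl

class VehicleClient(fl.client.NumPyClient):
    """One vehicle: trains and locally evaluates the global model on its own data."""

    def __init__(self, model, x_train, y_train, x_test, y_test):
        self.model = model
        self.x_train, self.y_train = x_train, y_train   # 80% of the local data
        self.x_test, self.y_test = x_test, y_test       # 20% reserved in round 1

    def get_parameters(self, config):
        return self.model.get_weights()

    def fit(self, parameters, config):
        self.model.set_weights(parameters)               # receive the global weights
        self.model.fit(self.x_train, self.y_train,
                       epochs=5, batch_size=64, verbose=0)
        return self.model.get_weights(), len(self.x_train), {}

    def evaluate(self, parameters, config):
        self.model.set_weights(parameters)               # local evaluation of the global model
        loss, acc = self.model.evaluate(self.x_test, self.y_test, verbose=0)
        return float(loss), len(self.x_test), {"accuracy": float(acc)}
```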
- CPU: AMD Ryzen 7 8700G with Radeon 780M Graphics (8 cores, 16 threads), 4.20 GHz.
- RAM: 32 GB.
- No GPU available.
3.2. MLP
3.3. GRU
3.4. LSTM
3.5. Differential Privacy
3.6. Discussion
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Dhull, P.; Guevara, A.P.; Ansari, M.; Pollin, S.; Shariati, N.; Schreurs, D. Internet of Things networks: Enabling simultaneous wireless information and power transfer. IEEE Microw. Mag. 2022, 23, 39–54. [Google Scholar] [CrossRef]
- Rossini, R.; Lopez, L. Towards an European Open Continuum Reference Stack and Architecture. In Proceedings of the 2024 9th International Conference on Smart and Sustainable Technologies (SpliTech), Bol and Split, Croatia, 25–28 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–5. [Google Scholar] [CrossRef]
- Griffith, D. Innovation at the edge: IoT 2.0. In Proceedings of the 2022 IEEE Asian Solid-State Circuits Conference (A-SSCC), Taipei, Taiwan, 6–9 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 2–3. [Google Scholar] [CrossRef]
- Chandy, A. A review on iot based medical imaging technology for healthcare applications. J. Innov. Image Process. (JIIP) 2019, 1, 51–60. [Google Scholar] [CrossRef]
- Sadoughi, F.; Behmanesh, A.; Sayfouri, N. Internet of things in medicine: A systematic mapping study. J. Biomed. Inform. 2020, 103, 103383. [Google Scholar] [CrossRef]
- Ranjan, R.; Sahana, B.C. A Comprehensive Roadmap for Transforming Healthcare from Hospital-Centric to Patient-Centric Through Healthcare Internet of Things (IoT). Eng. Sci. 2024, 30, 1175. [Google Scholar] [CrossRef]
- Abdulmalek, S.; Nasir, A.; Jabbar, W.A.; Almuhaya, M.A.; Bairagi, A.K.; Khan, M.A.M.; Kee, S.H. IoT-based healthcare-monitoring system towards improving quality of life: A review. Healthcare 2022, 10, 1993. [Google Scholar] [CrossRef] [PubMed]
- Javaid, M.; Haleem, A.; Singh, R.P.; Rab, S.; Suman, R. Upgrading the manufacturing sector via applications of Industrial Internet of Things (IIoT). Sensors Int. 2021, 2, 100129. [Google Scholar] [CrossRef]
- Lampropoulos, G.; Siakas, K.; Anastasiadis, T. Internet of things in the context of industry 4.0: An overview. Int. J. Entrep. Knowl. 2019, 7, 4–19. [Google Scholar] [CrossRef]
- Li, Q.; Yang, Y.; Jiang, P. Remote monitoring and maintenance for equipment and production lines on industrial internet: A literature review. Machines 2022, 11, 12. [Google Scholar] [CrossRef]
- Martínez, M.Z.; Silveira, L.H.M.D.; Marin-Perez, R.; Gomez, A.F.S. Development of a Neural Network System for Predicting Topsoil Moisture Using Remote Sensing and Rainfall Forecast Data. In Proceedings of the 2024 4th International Conference on Embedded & Distributed Systems (EDiS), Bechar, Algeria, 3–5 November 2024; pp. 249–254. [Google Scholar] [CrossRef]
- Zambudio Martínez, M.; Silveira, L.H.M.d.; Marin-Perez, R.; Gomez, A.F.S. Development and Comparison of Artificial Neural Networks and Gradient Boosting Regressors for Predicting Topsoil Moisture Using Forecast Data. AI 2025, 6, 41. [Google Scholar] [CrossRef]
- Placidi, P.; Morbidelli, R.; Fortunati, D.; Papini, N.; Gobbi, F.; Scorzoni, A. Monitoring Soil and Ambient Parameters in the IoT Precision Agriculture Scenario: An Original Modeling Approach Dedicated to Low-Cost Soil Water Content Sensors. Sensors 2021, 21, 5110. [Google Scholar] [CrossRef]
- Ananthi, N.; Divya, J.; Divya, M.; Janani, V. IoT based smart soil monitoring system for agricultural production. In Proceedings of the 2017 IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR), Chennai, India, 7–8 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 209–214. [Google Scholar] [CrossRef]
- Ghosh, R.K.; Banerjee, A.; Aich, P.; Basu, D.; Ghosh, U. Intelligent IoT for automotive industry 4.0: Challenges, opportunities, and future trends. In Intelligent Internet of Things for Healthcare and Industry; Springer: Berlin/Heidelberg, Germany, 2022; pp. 327–352. [Google Scholar] [CrossRef]
- Krasniqi, X.; Hajrizi, E. Use of IoT technology to drive the automotive industry from connected to full autonomous vehicles. IFAC-PapersOnLine 2016, 49, 269–274. [Google Scholar] [CrossRef]
- Pourrahmani, H.; Yavarinasab, A.; Zahedi, R.; Gharehghani, A.; Mohammadi, M.H.; Bastani, P. The applications of Internet of Things in the automotive industry: A review of the batteries, fuel cells, and engines. Internet Things 2022, 19, 100579. [Google Scholar] [CrossRef]
- Khayyam, H.; Javadi, B.; Jalili, M.; Jazar, R.N. Artificial intelligence and internet of things for autonomous vehicles. In Nonlinear Approaches in Engineering Applications: Automotive Applications of Engineering Problems; Springer International Publishing: Cham, Switzerland, 2020; pp. 39–68. [Google Scholar] [CrossRef]
- Biswas, A.; Wang, H.C. Autonomous Vehicles Enabled by the Integration of IoT, Edge Intelligence, 5G, and Blockchain. Sensors 2023, 23, 1963. [Google Scholar] [CrossRef] [PubMed]
- Khattak, Z.H.; Smith, B.L.; Fontaine, M.D. Cyberattack Monitoring Architectures for Resilient Operation of Connected and Automated Vehicles. IEEE Open J. Intell. Transp. Syst. 2024, 5, 322–341. [Google Scholar] [CrossRef]
- Talpur, A.; Gurusamy, M. Machine learning for security in vehicular networks: A comprehensive survey. IEEE Commun. Surv. Tutorials 2021, 24, 346–379. [Google Scholar] [CrossRef]
- Demestichas, K.; Alexakis, T.; Peppes, N.; Adamopoulou, E. Comparative Analysis of Machine Learning-Based Approaches for Anomaly Detection in Vehicular Data. Vehicles 2021, 3, 171–186. [Google Scholar] [CrossRef]
- Jabbar, R.; Kharbeche, M.; Al-Khalifa, K.; Krichen, M.; Barkaoui, K. Blockchain for the Internet of Vehicles: A Decentralized IoT Solution for Vehicles Communication Using Ethereum. Sensors 2020, 20, 3928. [Google Scholar] [CrossRef]
- Shrestha, R.; Nam, S.Y.; Bajracharya, R.; Kim, S. Evolution of V2X Communication and Integration of Blockchain for Security Enhancements. Electronics 2020, 9, 1338. [Google Scholar] [CrossRef]
- Xun, Y.; Zhao, Y.; Liu, J. VehicleEIDS: A novel external intrusion detection system based on vehicle voltage signals. IEEE Internet Things J. 2022, 9, 2124–2133. [Google Scholar] [CrossRef]
- Tanaka, D.; Yamada, M.; Kashima, H.; Kishikawa, T.; Haga, T.; Sasaki, T. In-vehicle network intrusion detection and explanation using density ratio estimation. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2238–2243. [Google Scholar] [CrossRef]
- Wu, W.; Huang, Y.; Kurachi, R.; Zeng, G.; Xie, G.; Li, R.; Li, K. Sliding window optimized information entropy analysis method for intrusion detection on in-vehicle networks. IEEE Access 2018, 6, 45233–45245. [Google Scholar] [CrossRef]
- Al-Jarrah, O.Y.; Maple, C.; Dianati, M.; Oxtoby, D.; Mouzakitis, A. Intrusion detection systems for intra-vehicle networks: A review. IEEE Access 2019, 7, 21266–21289. [Google Scholar] [CrossRef]
- Zhang, L.; Yan, X.; Ma, D. A binarized neural network approach to accelerate in-vehicle network intrusion detection. IEEE Access 2022, 10, 123505–123520. [Google Scholar] [CrossRef]
- Alladi, T.; Kohli, V.; Chamola, V.; Yu, F.R. A deep learning based misbehavior classification scheme for intrusion detection in cooperative intelligent transportation systems. Digit. Commun. Netw. 2023, 9, 1113–1122. [Google Scholar] [CrossRef]
- So, S.; Sharma, P.; Petit, J. Integrating plausibility checks and machine learning for misbehavior detection in VANET. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 564–571. [Google Scholar] [CrossRef]
- Van Der Heijden, R.W.; Lukaseder, T.; Kargl, F. Veremi: A dataset for comparable evaluation of misbehavior detection in vanets. In Proceedings of the Security and Privacy in Communication Networks: 14th International Conference, SecureComm 2018, Singapore, Singapore, 8–10 August 2018; Proceedings, Part I. Springer: Berlin/Heidelberg, Germany, 2018; pp. 318–337. [Google Scholar] [CrossRef]
- Campos, E.M.; Hernandez-Ramos, J.L.; Vidal, A.G.; Baldini, G.; Skarmeta, A. Misbehavior detection in intelligent transportation systems based on federated learning. Internet Things 2024, 25, 101127. [Google Scholar] [CrossRef]
- Medlin, K.; Leyffer, S.; Raghavan, K. A Bilevel Optimization Framework for Imbalanced Data Classification. arXiv 2024. [Google Scholar] [CrossRef]
- Ramsauer, A.; Baumann, P.M.; Lex, C. The Influence of Data Preparation on Outlier Detection in Driveability Data. SN Comput. Sci. 2021, 2, 222. [Google Scholar] [CrossRef]
- Patro, S. Normalization: A Preprocessing Stage. CoRR 2015, abs/1503.06462. [Google Scholar] [CrossRef]
- Izonin, I.; Tkachenko, R.; Shakhovska, N.; Ilchyshyn, B.; Singh, K.K. A Two-Step Data Normalization Approach for Improving Classification Accuracy in the Medical Diagnosis Domain. Mathematics 2022, 10, 1942. [Google Scholar] [CrossRef]
- Popescu, M.C.; Balas, V.E.; Perescu-Popescu, L.; Mastorakis, N. Multilayer perceptron and neural networks. WSEAS Trans. Circuits Syst. 2009, 8, 579–588. [Google Scholar]
- Murtagh, F. Multilayer perceptrons for classification and regression. Neurocomputing 1991, 2, 183–197. [Google Scholar] [CrossRef]
- Pavani, K.; Damodaram, A. Intrusion detection using MLP for MANETs. In Proceedings of the Third International Conference on Computational Intelligence and Information Technology (CIIT 2013). The Institution of Engineering and Technology, Mumbai, India, 18–19 October 2013; pp. 440–444. [Google Scholar] [CrossRef]
- Amato, F.; Mazzocca, N.; Moscato, F.; Vivenzio, E. Multilayer perceptron: An intelligent model for classification and intrusion detection. In Proceedings of the 2017 31st International conference on advanced information networking and applications workshops (WAINA), Taipei, Taiwan, 27–29 March 2017; pp. 686–691. [Google Scholar] [CrossRef]
- Ahmad, I.; Abdullah, A.; Alghamdi, A.; Alnfajan, K.; Hussain, M. Intrusion detection using feature subset selection based on MLP. Sci. Res. Essays 2011, 6, 6804–6810. [Google Scholar] [CrossRef]
- Sanmorino, A.; Setiawan, H.; Coyanda, J.R. The utilization of machine learning for network intrusion detection systems. Inform. Autom. Pomiary Gospod. Ochr. Środowiska 2024, 14, 86–89. [Google Scholar] [CrossRef]
- Dey, R.; Salem, F.M. Gate-variants of gated recurrent unit (GRU) neural networks. In Proceedings of the 2017 IEEE 60th international midwest symposium on circuits and systems (MWSCAS), IEEE, Boston, MA, USA, 6–9 August 2017; pp. 1597–1600. [Google Scholar] [CrossRef]
- Zhang, Y.; Xiong, X.; Xiao, L.; Li, J.; Luo, R.; Zhang, J.; Zhang, H. Intrusion Detection Model Based on Recursive Gated Convolution and Bidirectional Gated Recurrent Units. In Proceedings of the 2024 IEEE International Conference on Smart Internet of Things (SmartIoT), Shenzhen, China, 14–16 November 2024; pp. 433–438. [Google Scholar] [CrossRef]
- Faiq Kamel, F.; Salih Mahdi, M. Intrusion Detection Systems Based on RNN and GRU Models using CSE-CIC-IDS2018 Dataset in AWS Cloud. J. Qadisiyah Comput. Sci. Math. 2024, 16, 141–160. [Google Scholar] [CrossRef]
- Panggabean, C.; Venkatachalam, C.; Shah, P.; John, S.; Devi, P.R.; Venkatachalam, S. Intelligent DoS and DDoS Detection: A Hybrid GRU-NTM Approach to Network Security. In Proceedings of the 2024 5th International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 18–20 September 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 658–665. [Google Scholar] [CrossRef]
- Graves, A. Long Short-Term Memory. Supervised Seq. Label. Recurr. Neural Netw. 2012, 385, 37–45. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long Short-term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A Search Space Odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2222–2232. [Google Scholar] [CrossRef]
- Khan, M.A.; Karim, M.R.; Kim, Y. A Scalable and Hybrid Intrusion Detection System Based on the Convolutional-LSTM Network. Symmetry 2019, 11, 583. [Google Scholar] [CrossRef]
- Laghrissi, F.; Douzi, S.; Douzi, K.; Hssina, B. Intrusion detection systems using long short-term memory (LSTM). J. Big Data 2021, 8, 65. [Google Scholar] [CrossRef]
- Kumar, G.S.; Premalatha, K.; Maheshwari, G.U.; Kanna, P.R.; Vijaya, G.; Nivaashini, M. Differential privacy scheme using Laplace mechanism and statistical method computation in deep neural network for privacy preservation. Eng. Appl. Artif. Intell. 2024, 128, 107399. [Google Scholar] [CrossRef]
- El Ouadrhiri, A.; Abdelhadi, A. Differential privacy for deep and federated learning: A survey. IEEE Access 2022, 10, 22359–22380. [Google Scholar] [CrossRef]
- Sarathy, R.; Muralidhar, K. Evaluating Laplace noise addition to satisfy differential privacy for numeric data. Trans. Data Priv. 2011, 4, 1–17. [Google Scholar]
- Beutel, D.J.; Topal, T.; Mathur, A.; Qiu, X.; Fernandez-Marques, J.; Gao, Y.; Sani, L.; Li, K.H.; Parcollet, T.; de Gusmão, P.P.B.; et al. Flower: A friendly federated learning research framework. CoRR 2020, abs/2007.14390. [Google Scholar] [CrossRef]
- Xia, F.; Cheng, W. A survey on privacy-preserving federated learning against poisoning attacks. Clust. Comput. 2024, 27, 13565–13582. [Google Scholar] [CrossRef]
- Zhang, C.; Yang, S.; Mao, L.; Ning, H. Anomaly detection and defense techniques in federated learning: A comprehensive review. Artif. Intell. Rev. 2024, 57, 150. [Google Scholar] [CrossRef]
Attack | Description | Main Impact |
---|---|---|
Constant Position | Always transmits a fixed location, ignoring actual movement. | Misinterpreted as a stationary object, potentially disrupting tracking and safety measures. |
Constant Position Offset | Reports the true position with a fixed spatial error added. | Consistently inaccurate location information, leading to systematic misestimation of vehicle dynamics. |
Random Position | Sends completely random locations for each message. | Generates extreme outliers that degrade situational awareness and detection reliability. |
Random Position Offset | Applies a variable random error to the true position. | Produces inconsistent positional data, complicating motion prediction and validation. |
Constant Speed | Reports a fixed speed value regardless of actual speed changes. | Distorts dynamic estimations, adversely affecting collision avoidance and trajectory planning. |
Constant Speed Offset | Adds a constant error to the actual speed. | Leads to persistent misreporting of vehicle dynamics, undermining accurate speed-based predictions. |
Random Speed | Transmits completely random speed values independent of true speed. | Introduces high unpredictability, severely impairing the estimation of relative motion. |
Random Speed Offset | Applies a variable random error to the actual speed. | Results in erratic speed reports that yield unreliable dynamic modeling and sensor fusion. |
Attack | Number of Examples |
---|---|
Normal | 1,900,539 |
Constant speed offset | 44,359 |
Random position | 43,857 |
Constant position | 43,653 |
Constant position offset | 43,567 |
Random speed offset | 42,583 |
Random position offset | 42,575 |
Random speed | 42,258 |
Constant speed | 41,925 |
Variable | Shapiro–Wilk p-Value | KS Uniform Statistic | KS Normal Statistic |
---|---|---|---|
posx | 0.0000 | 0.1888 | 0.1635 |
posy | 0.0000 | 0.2613 | 0.0909 |
spdx | 0.0000 | 0.3371 | 0.1021 |
spdy | 0.0000 | 0.3316 | 0.0684 |
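The three normalisation schemes compared in the result tables below correspond to standard scalers; a minimal sketch (scikit-learn assumed, with each scaler fitted on the training split only):

```python
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler

scalers = {
    "Min-Max": MinMaxScaler(),     # rescales each feature to [0, 1]
    "Standard": StandardScaler(),  # zero mean, unit variance
    "Robust": RobustScaler(),      # median and IQR, less sensitive to outliers
}

# X_train / X_test are the lagged feature matrices described in Section 2.2:
# X_test_scaled = {name: s.fit(X_train).transform(X_test) for name, s in scalers.items()}
```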
Parameter | Value |
---|---|
ε | 0.8, 1, 2, 5 |
Clipping | |
Sensitivity | Difference between weights of the previous and current round |
Parameter | Value |
---|---|
ε | 0.8, 1, 2, 5 |
δ | |
Clipping | |
Sensitivity | Difference between weights of the previous and current round |
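A minimal sketch of the noise-addition step described by these parameter tables (NumPy assumed; the clipping threshold, δ, and the exact calibration used by the authors are not given here, so the sensitivity and scale computations below are only one standard way to instantiate the Laplace and Gaussian mechanisms):

```python
import numpy as np

def privatise_update(prev_w, new_w, epsilon, clip, mechanism="laplace", delta=1e-5):
    """Clip the round-to-round weight difference and add calibrated noise.

    As in the tables above, the sensitivity is derived from the difference
    between the previous and current round weights (after clipping).
    """
    noisy = []
    for prev, new in zip(prev_w, new_w):
        diff = np.clip(new - prev, -clip, clip)
        sensitivity = np.abs(diff).max()                 # per-layer sensitivity estimate
        if mechanism == "laplace":
            scale = sensitivity / epsilon                # Laplace mechanism
            noise = np.random.laplace(0.0, scale, size=diff.shape)
        else:                                            # Gaussian mechanism, needs (epsilon, delta)
            sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon
            noise = np.random.normal(0.0, sigma, size=diff.shape)
        noisy.append(prev + diff + noise)
    return noisy
```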
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) | Time (s) |
---|---|---|---|---|---|---|
Min–Max | 1 | 69.75 | 70.93 | 72.28 | 70.54 | 647.93 |
Min–Max | 2 | 71.99 | 75.56 | 78.75 | 75.44 | 642.49 |
Min–Max | 5 | 76.88 | 81.52 | 83.68 | 81.39 | 649.52 |
Standard | 1 | 88.05 | 83.74 | 85.14 | 83.21 | 676.24 |
Standard | 2 | 90.25 | 92.32 | 92.74 | 92.29 | 682.59 |
Standard | 5 | 85.62 | 91.13 | 91.88 | 91.17 | 636.06 |
Robust | 1 | 86.88 | 79.26 | 79.27 | 77.62 | 654.03 |
Robust | 2 | 88.00 | 92.28 | 92.64 | 92.30 | 649.17 |
Robust | 5 | 84.38 | 90.87 | 91.66 | 90.86 | 661.61 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) |
---|---|---|---|---|---|
Min–Max | 1 | 75.25 | 74.41 | 76.44 | 73.84 |
Min–Max | 2 | 79.13 | 77.74 | 79.61 | 77.76 |
Min–Max | 5 | 78.25 | 80.99 | 82.92 | 80.93 |
Standard | 1 | 89.75 | 89.05 | 89.41 | 89.02 |
Standard | 2 | 92.12 | 91.94 | 92.41 | 91.87 |
Standard | 5 | 89.00 | 90.90 | 91.79 | 90.90 |
Robust | 1 | 91.62 | 78.65 | 78.08 | 76.96 |
Robust | 2 | 93.12 | 92.51 | 92.92 | 92.51 |
Robust | 5 | 90.38 | 90.85 | 91.60 | 90.82 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) | Time (s) |
---|---|---|---|---|---|---|
Min–Max | 1 | 76.88 | 79.15 | 80.48 | 79.16 | 496.37 |
Min–Max | 2 | 79.13 | 76.00 | 79.67 | 75.59 | 501.87 |
Min–Max | 5 | 80.25 | 81.42 | 82.81 | 81.54 | 505.93 |
Standard | 1 | 90.37 | 87.81 | 88.80 | 87.55 | 505.95 |
Standard | 2 | 89.50 | 88.75 | 89.51 | 88.74 | 508.35 |
Standard | 5 | 90.12 | 90.83 | 91.72 | 90.81 | 502.01 |
Robust | 1 | 84.25 | 86.08 | 87.25 | 85.86 | 503.41 |
Robust | 2 | 88.88 | 90.81 | 91.54 | 90.73 | 490.67 |
Robust | 5 | 88.50 | 89.88 | 90.68 | 89.85 | 510.12 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) |
---|---|---|---|---|---|
Min–Max | 1 | 80.50 | 78.56 | 79.81 | 78.50 |
Min–Max | 2 | 82.37 | 74.60 | 79.60 | 74.29 |
Min–Max | 5 | 80.13 | 81.00 | 82.23 | 81.08 |
Standard | 1 | 92.63 | 88.50 | 89.41 | 88.25 |
Standard | 2 | 94.63 | 89.79 | 90.34 | 89.78 |
Standard | 5 | 93.75 | 92.89 | 93.38 | 92.93 |
Robust | 1 | 91.63 | 85.69 | 87.24 | 85.28 |
Robust | 2 | 93.88 | 91.43 | 92.08 | 91.34 |
Robust | 5 | 93.50 | 91.44 | 92.00 | 91.47 |
Number | Attack |
---|---|
0 | Normal |
1 | Constant position |
2 | Constant position offset |
3 | Random position |
4 | Random position offset |
5 | Constant speed |
6 | Constant speed offset |
7 | Random speed |
8 | Random speed offset |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) | Time (s) |
---|---|---|---|---|---|---|
Min–Max | 1 | 89.25 | 91.21 | 91.82 | 91.26 | 1407.76 |
Min–Max | 2 | 90.25 | 90.56 | 91.15 | 90.55 | 1375.55 |
Min–Max | 5 | 89.75 | 91.31 | 92.26 | 91.22 | 1346.84 |
Standard | 1 | 92.25 | 95.24 | 95.29 | 95.24 | 1354.92 |
Standard | 2 | 91.62 | 96.43 | 96.51 | 96.44 | 1367.56 |
Standard | 5 | 93.50 | 96.39 | 96.54 | 96.40 | 1364.44 |
Robust | 1 | 90.13 | 95.02 | 95.07 | 95.01 | 1335.29 |
Robust | 2 | 89.50 | 94.84 | 94.92 | 94.86 | 1324.19 |
Robust | 5 | 86.88 | 93.24 | 93.46 | 93.22 | 1309.82 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) |
---|---|---|---|---|---|
Min–Max | 1 | 92.75 | 92.66 | 92.94 | 92.71 |
Min–Max | 2 | 93.12 | 93.72 | 93.91 | 93.75 |
Min–Max | 5 | 91.38 | 92.11 | 92.64 | 92.03 |
Standard | 1 | 95.37 | 95.40 | 95.45 | 95.39 |
Standard | 2 | 95.75 | 96.76 | 96.81 | 96.77 |
Standard | 5 | 97.12 | 97.18 | 97.26 | 97.19 |
Robust | 1 | 95.62 | 95.53 | 95.57 | 95.53 |
Robust | 2 | 96.37 | 96.10 | 96.28 | 96.10 |
Robust | 5 | 95.75 | 96.53 | 96.73 | 96.53 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) | Time (s) |
---|---|---|---|---|---|---|
Min–Max | 1 | 88.50 | 90.47 | 90.88 | 90.55 | 1125.18 |
Min–Max | 2 | 90.50 | 90.26 | 90.71 | 90.31 | 1272.08 |
Min–Max | 5 | 85.63 | 87.72 | 88.41 | 87.81 | 1236.67 |
Standard | 1 | 91.38 | 94.48 | 94.55 | 94.48 | 1263.53 |
Standard | 2 | 92.75 | 95.34 | 95.41 | 95.35 | 1272.17 |
Standard | 5 | 91.75 | 96.02 | 96.10 | 96.01 | 1231.13 |
Robust | 1 | 89.03 | 94.02 | 94.08 | 94.02 | 1114.88 |
Robust | 2 | 87.12 | 92.96 | 93.20 | 92.96 | 1073.01 |
Robust | 5 | 87.63 | 95.13 | 95.39 | 95.11 | 1087.43 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) |
---|---|---|---|---|---|
Min–Max | 1 | 90.62 | 91.45 | 91.67 | 91.50 |
Min–Max | 2 | 92.62 | 91.42 | 91.83 | 91.50 |
Min–Max | 5 | 88.25 | 90.00 | 90.72 | 90.07 |
Standard | 1 | 94.50 | 94.80 | 94.91 | 94.81 |
Standard | 2 | 96.38 | 95.43 | 95.49 | 95.43 |
Standard | 5 | 95.87 | 96.03 | 96.15 | 96.03 |
Robust | 1 | 93.75 | 94.47 | 94.57 | 94.48 |
Robust | 2 | 94.50 | 94.13 | 94.27 | 94.14 |
Robust | 5 | 93.37 | 95.50 | 95.67 | 95.49 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) | Time (s) |
---|---|---|---|---|---|---|
Min–Max | 1 | 89.00 | 89.91 | 90.34 | 90.01 | 1213.32 |
Min–Max | 2 | 91.00 | 88.60 | 89.05 | 88.41 | 1238.60 |
Min–Max | 5 | 89.38 | 89.54 | 90.43 | 89.44 | 1254.09 |
Standard | 1 | 91.50 | 94.55 | 94.66 | 94.56 | 1308.27 |
Standard | 2 | 96.75 | 95.69 | 95.77 | 95.70 | 1330.53 |
Standard | 5 | 92.88 | 96.23 | 96.34 | 96.24 | 1275.94 |
Robust | 1 | 91.25 | 93.81 | 93.97 | 93.82 | 1217.99 |
Robust | 2 | 91.63 | 95.81 | 95.89 | 95.80 | 1202.06 |
Robust | 5 | 90.50 | 95.77 | 95.97 | 95.77 | 1174.76 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) |
---|---|---|---|---|---|
Min–Max | 1 | 90.75 | 90.25 | 90.45 | 90.30 |
Min–Max | 2 | 91.62 | 91.23 | 91.50 | 91.23 |
Min–Max | 5 | 90.12 | 91.13 | 91.79 | 91.14 |
Standard | 1 | 94.62 | 94.82 | 94.91 | 94.83 |
Standard | 2 | 96.93 | 95.96 | 96.01 | 95.97 |
Standard | 5 | 95.12 | 96.52 | 96.60 | 96.52 |
Robust | 1 | 95.13 | 94.29 | 94.45 | 94.30 |
Robust | 2 | 96.12 | 96.03 | 96.09 | 96.04 |
Robust | 5 | 95.12 | 96.25 | 96.36 | 96.26 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) | Time (s) |
---|---|---|---|---|---|---|
Min–Max | 1 | 90.00 | 89.77 | 90.21 | 89.77 | 1494.15 |
Min–Max | 2 | 90.13 | 91.35 | 91.98 | 91.19 | 1504.03 |
Min–Max | 5 | 88.38 | 89.26 | 89.76 | 89.24 | 1480.25 |
Standard | 1 | 90.88 | 95.56 | 95.60 | 95.56 | 1500.59 |
Standard | 2 | 93.63 | 96.90 | 96.92 | 96.90 | 1507.95 |
Standard | 5 | 93.12 | 96.22 | 96.30 | 96.22 | 1473.28 |
Robust | 1 | 90.07 | 94.82 | 94.90 | 94.82 | 1484.23 |
Robust | 2 | 90.75 | 95.65 | 95.81 | 95.64 | 1504.35 |
Robust | 5 | 88.97 | 94.66 | 95.13 | 94.63 | 1470.04 |
Normalisation Applied | Number of Lags | Accuracy (%) | Recall (%) | Precision (%) | F1-Score (%) |
---|---|---|---|---|---|
Min–Max | 1 | 93.63 | 91.79 | 92.01 | 91.84 |
Min–Max | 2 | 93.50 | 93.65 | 93.80 | 93.67 |
Min–Max | 5 | 90.62 | 91.84 | 92.38 | 91.85 |
Standard | 1 | 95.12 | 95.83 | 95.89 | 95.84 |
Standard | 2 | 97.88 | 96.94 | 96.98 | 96.95 |
Standard | 5 | 96.00 | 96.84 | 96.94 | 96.84 |
Robust | 1 | 94.13 | 95.79 | 95.84 | 95.79 |
Robust | 2 | 97.37 | 96.61 | 96.64 | 96.61 |
Robust | 5 | 96.25 | 96.05 | 96.20 | 96.04 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zambudio Martínez, M.; Marin-Perez, R.; Skarmeta Gomez, A.F. Development of an In-Vehicle Intrusion Detection Model Integrating Federated Learning and LSTM Networks. Information 2025, 16, 292. https://doi.org/10.3390/info16040292