Network Security Situation Element Extraction Algorithm Based on Hybrid Deep Learning
Abstract
1. Introduction
2. Related Works
3. Network Security Situation Element Extraction Algorithm Based on CNN-LSTM-BP
3.1. Convolutional Neural Networks
- Input layer: The input layer receives and preprocesses data for the convolutional layer, ensuring effective feature learning by handling outliers and missing values.
- Convolution layer: The convolution layer extracts features by sliding a convolution kernel over the input at a fixed stride, computing element-wise products with the receptive field at each position [21,22], and passing the resulting feature maps through a nonlinear activation function. This design enables the convolutional neural network to automatically learn and capture spatial hierarchical features in the input data, providing a more meaningful representation for the subsequent network layers. To reduce the number of parameters, CNNs employ a strategy known as "parameter sharing": the same kernel weights are reused at every position. This strategy not only helps to realize translation-invariant classification but also makes the model lighter, enhances its scalability, and improves the training speed and generalization ability of the model [23].
- Pooling layer: After feature extraction, a pooling layer can optionally be added to downsample the feature maps and discard redundant features. Pooling therefore compresses the representation while altering the underlying information only slightly, further reducing the number of relevant parameters and simplifying the network's computations [24].
3.2. Long Short-Term Memory Neural Networks
3.3. Backpropagation Algorithm
3.4. A Cybersecurity Situational Element Extraction Model Based on CNN-LSTM-BP
- (1) Preprocess data samples.
- (2) Convolution layer and pooling layer.
- (3) LSTM layer.
- (4) Fully connected layer and BP layer.
- (5) Error backpropagation.
- (6) Model evaluation.
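The forward pass implied by these steps can be sketched end to end with NumPy. All shapes, weights, and the hidden size below are illustrative assumptions (random weights, no training), not the authors' configuration; the 41-feature input merely mirrors the KDD Cup99 feature count.

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Preprocess: min-max normalize one sample of 41 features.
x = rng.random(41)
x = (x - x.min()) / (x.max() - x.min() + 1e-8)

# (2) 1D convolution (kernel width 3, valid padding) + ReLU + max pool (2).
k = rng.standard_normal(3)
conv = np.maximum([x[i:i+3] @ k for i in range(len(x) - 2)], 0.0)
conv = np.asarray(conv)
pool = conv[:len(conv)//2*2].reshape(-1, 2).max(axis=1)

# (3) LSTM layer: one cell stepped over the pooled features as a sequence.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 8                                           # hidden size (illustrative)
Wx = rng.standard_normal((4*H, 1))              # input weights for i, f, o, g gates
Wh = rng.standard_normal((4*H, H))              # recurrent weights
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in pool:                                  # each pooled value = one time step
    z = Wx @ np.array([t]) + Wh @ h + b
    i, f, o, g = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)

# (4) Fully connected + BP output layer -> class probabilities (softmax).
n_class = 5
W = rng.standard_normal((n_class, H))
logits = W @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# (5)/(6) Training would backpropagate the cross-entropy error through these
# layers and evaluate the model with ACC, recall, precision, and F1-score.
```

The sketch only demonstrates how the layer outputs chain together; in the paper's model each stage has learned weights and operates on batches.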
4. Experimental Results and Analysis
4.1. Data Preprocessing and Parameter Setting
4.2. Comparative Experiment and Result Analysis
- Effect comparison of situation element extraction models
- Comparison of classification effects of typical CNN models
- Ablation experiment
4.3. Algorithm Complexity Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Barak, I. Critical infrastructure under attack: Lessons from a honeypot. Netw. Secur. 2020, 2020, 16–17. [Google Scholar] [CrossRef]
- Yang, J.; Chen, K.; Cao, K.; Guo, X. The core technology analysis of industrial Internet security situational awareness. Cyberspace Secur. 2019, 10, 61–66. [Google Scholar]
- Endsley, M.R. Design and evaluation for situation awareness enhancement. In Proceedings of the Human Factors Society Annual Meeting, Sage CA, LA, USA, 24–28 October 1988; pp. 97–101. [Google Scholar]
- Bass, T.; Gruber, D. A glimpse into the future of id. Mag. USENIX SAGE 1999, 24, 40–49. [Google Scholar]
- You, J.B.; Kim, S.K.; Jun, H.I.; Suh, D.H. A Novel Way of Recognition and Avoidance of Risk Factors in Residential Environments. In Proceedings of the 2015 9th International Conference on Future Generation Communication and Networking (FGCN), Jeju, Republic of Korea, 25–28 November 2015; pp. 45–48. [Google Scholar]
- Alavizadeh, H.; Jang-Jaccard, J.; Enoch, S.Y.; Al-Sahaf, H.; Welch, I.; Camtepe, S.A.; Kim, D.D. A survey on cyber situation-awareness systems: Framework, techniques, and insights. ACM Comput. Surv. 2022, 55, 1–37. [Google Scholar] [CrossRef]
- Fukushima, K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 1980, 36, 193–202. [Google Scholar] [CrossRef] [PubMed]
- Waskle, S.; Parashar, L.; Singh, U. Intrusion detection system using PCA with random forest approach. In Proceedings of the 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2–4 July 2020; pp. 803–808. [Google Scholar]
- Udas, P.B.; Karim, M.E.; Roy, K.S. SPIDER: A shallow PCA based network intrusion detection system with enhanced recurrent neural networks. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 10246–10272. [Google Scholar] [CrossRef]
- Singh, A.; Nagar, J.; Amutha, J.; Sharma, S. P2CA-GAM-ID: Coupling of probabilistic principal components analysis with generalised additive model to predict the k− barriers for intrusion detection. Eng. Appl. Artif. Intell. 2023, 126, 107137. [Google Scholar] [CrossRef]
- Wang, R.; Ma, C.; Wu, P. An intrusion detection method based on federated learning and convolutional neural network. Netinfo Secur. 2020, 20, 47–54. [Google Scholar]
- Sikiru, I.A.; Kora, A.D.; Ezin, E.C.; Imoize, A.L.; Li, C.-T. Hybridization of Learning Techniques and Quantum Mechanism for IIoT Security: Applications, Challenges, and Prospects. Electronics 2024, 13, 4153. [Google Scholar] [CrossRef]
- Lopes, I.O.; Zou, D.; Abdulqadder, I.H.; Ruambo, F.A.; Yuan, B. Effective network intrusion detection via representation learning: A Denoising AutoEncoder approach. Comput. Commun. 2022, 194, 55–65. [Google Scholar] [CrossRef]
- D’Agostino, P.; Violante, M.; Macario, G. A Scalable Fog Computing Solution for Industrial Predictive Maintenance and Customization. Electronics 2025, 14, 24. [Google Scholar] [CrossRef]
- Liu, Y.; Sun, Y.; Liu, C.; Weng, Y. Industrial Internet Security Situation Assessment Method Based on Self-Attention Mechanism. In Proceedings of the 2024 3rd International Conference on Artificial Intelligence, Internet of Things and Cloud Computing Technology (AIoTC), Wuhan, China, 13–15 September 2024; pp. 148–151. [Google Scholar]
- Yang, Y.; Yao, C.; Yang, J.; Yin, K. A network security situation element extraction method based on conditional generative adversarial network and transformer. IEEE Access 2022, 10, 107416–107430. [Google Scholar] [CrossRef]
- Taheri, R.; Ahmadzadeh, M.; Kharazmi, M. A new approach for feature selection in intrusion detection system. Fen Bilim. Derg. (CFD) 2015, 36, 1344–1357. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Chen, T. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- LeCun, Y.; Kavukcuoglu, K.; Farabet, C. Convolutional networks and applications in vision. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010; pp. 253–256. [Google Scholar]
- Gao, Y.; Rong, W.; Shen, Y.; Xiong, Z. Convolutional neural network based sentiment analysis using Adaboost combination. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 1333–1338. [Google Scholar]
- Song, J.; Park, S.; Lim, M. Detection of Limit Situation in Segmentation Network via CNN. In Proceedings of the 2020 20th International Conference on Control, Automation and Systems (ICCAS), Busan, Republic of Korea, 13–16 October 2020; pp. 892–894. [Google Scholar]
- Yu, D.; Wang, H.; Chen, P.; Wei, Z. Mixed pooling for convolutional neural networks. In Proceedings of the Rough Sets and Knowledge Technology: 9th International Conference, RSKT 2014, Shanghai, China, 24–26 October 2014; pp. 364–375. [Google Scholar]
- Haralabopoulos, G.; Razis, G.; Anagnostopoulos, I. A Modified Long Short-Term Memory Cell. Int. J. Neural Syst. 2023, 33, 2350039. [Google Scholar] [CrossRef] [PubMed]
- Graves, A. Long short-term memory. Superv. Seq. Label. Recurr. Neural Netw. 2012, 385, 37–45. [Google Scholar]
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
- McInerney, J.M.; Haines, K.G.; Biafore, S.; Hecht-Nielsen, R. Back propagation error surfaces can have local minima. In Proceedings of the International 1989 Joint Conference on Neural Networks, Washington, DC, USA, 18–22 June 1989; p. 627. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
- Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Siddique, K.; Akhtar, Z.; Khan, F.A.; Kim, Y. KDD cup 99 data sets: A perspective on the role of data sets in network intrusion detection research. Computer 2019, 52, 41–51. [Google Scholar] [CrossRef]
- Morris, T.; Gao, W. Industrial control system traffic data sets for intrusion detection research. In Proceedings of the Critical Infrastructure Protection VIII: 8th IFIP WG 11.10 International Conference, ICCIP 2014, Arlington, VA, USA, 17–19 March 2014; pp. 65–78. [Google Scholar]
- Parsaei, M.; Taheri, R.; Javidan, R. Perusing the effect of discretization of data on accuracy of predicting naive bayes algorithm. J. Curr. Res. Sci. 2016, 2016, 457. [Google Scholar]
- Specht, D.F. Probabilistic neural networks. Neural Netw. 1990, 3, 109–118. [Google Scholar] [CrossRef]
- Peterson, L.E. K-nearest neighbor. Scholarpedia 2009, 4, 1883. [Google Scholar] [CrossRef]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Subakan, C.; Ravanelli, M.; Cornell, S.; Bronzi, M.; Zhong, J. Attention is all you need in speech separation. In Proceedings of the ICASSP 2021–2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 21–25. [Google Scholar]
- Cao, B.; Li, C.; Song, Y.; Qin, Y.; Chen, C. Network intrusion detection model based on CNN and GRU model. Appl. Sci. 2022, 12, 4184. [Google Scholar] [CrossRef]
- Cai, Z.; Si, Y.; Zhang, J.; Zhu, L.; Li, P.; Feng, Y. Industrial Internet Intrusion Detection Based on Res-CNN-SRU. Electronics 2023, 12, 3267. [Google Scholar] [CrossRef]
- Kranthi Kumar, K.; Bharadwaj, R.; Ch, S.; Sujana, S. Effective deep learning approach based on VGG-mini architecture for iris recognition. Ann. Rom. Soc. Cell Biol. 2021, 25, 4718–4726. [Google Scholar]
Label | Label Description | Encode | Number of Samples in the Training Set | Number of Samples in the Test Set |
---|---|---|---|---|
Normal | Normal data | 0 | 78,903 | 19,395 |
DOS | Denial-of-service attack | 1 | 312,906 | 78,552 |
Probing | Surveillance and other probing | 2 | 1997 | 521 |
R2L | Unauthorized access from a remote machine | 3 | 82 | 24 |
U2R | Unauthorized access to local superuser (root) privileges | 4 | 1328 | 313 |
Label | Label Description | Encode | Number of Samples in the Training Set | Number of Samples in the Test Set |
---|---|---|---|---|
Normal | Normal data | 0 | 5340 | 1335 |
NMRI | Naive malicious response injection attack | 1 | 268 | 67 |
CMRI | Complex malicious response injection attack | 2 | 1331 | 333 |
MSCI | Malicious state command injection attack | 3 | 74 | 19 |
MPCI | Malicious parameter command injection attack | 4 | 674 | 168 |
MFCI | Malicious function command injection attack | 5 | 32 | 8 |
DOS | Denial-of-service attack | 6 | 150 | 37 |
Recon | Reconnaissance attack | 7 | 626 | 157 |
Evaluation (%) | ACC | Recall | Precision | F1-Score |
---|---|---|---|---|
PNN | 90.76 | 90.76 | 98.18 | 94.32 |
KNN | 90.19 | 90.18 | 98.20 | 94.02 |
RF | 87.95 | 87.94 | 89.03 | 88.48 |
Transformer | 91.01 | 91.01 | 91.85 | 91.43 |
CGAN-transformer | 93.07 | 93.07 | 94.29 | 93.68 |
Ours | 98.03 | 98.03 | 98.41 | 98.22 |
Evaluation (%) | ACC | Recall | Precision | F1-Score |
---|---|---|---|---|
PNN | 91.25 | 91.25 | 91.85 | 91.52 |
KNN | 93.44 | 93.44 | 94.07 | 93.75 |
RF | 89.98 | 89.98 | 90.17 | 90.07 |
Transformer | 95.07 | 95.07 | 91.85 | 93.43 |
CNN-GRU | 94.69 | 78.92 | 78.94 | 75.45 |
Res-CNN-SRU | 98.79 | 95.04 | 95.34 | 95.38 |
Ours | 98.96 | 98.96 | 98.53 | 98.74 |
Layer Name | LeNet5 | MiniVGGNet |
---|---|---|
Conv 1 | Kernel size: (3,3); out channel: 6; stride: 1 | Kernel size: (3,3); out channel: 64; stride: 1 |
Pooling 1 | Average pooling (2,2) | Max pooling (2,2) |
Conv 2 | Kernel size: (3,3); out channel: 16; stride: 1 | Kernel size: (3,3); out channel: 128; stride: 1 |
Pooling 2 | Average pooling (2,2) | Max pooling (2,2) |
Conv 3 | / | Kernel size: (3,3); out channel: 256; stride: 1 |
Pooling 3 | / | Max pooling (2,2) |
Fc 1 | Hidden size × 256 | Hidden size × 512 |
Fc 3 | 128 × n_class | n_class |
Dataset | KDD Cup99 | | | | SCADA2014 | | | |
---|---|---|---|---|---|---|---|---|
Evaluation (%) | ACC | Recall | Precision | F1-Score | ACC | Recall | Precision | F1-Score |
LeNet5 | 97.52 | 97.52 | 97.30 | 97.41 | 96.38 | 96.38 | 96.53 | 96.45 |
LeNet5-LSTM-BP | 97.66 | 97.66 | 97.83 | 97.74 | 96.52 | 96.52 | 96.40 | 96.46 |
MiniVGGNet | 80.60 | 80.60 | 87.50 | 83.91 | 92.58 | 92.58 | 93.01 | 92.79 |
MiniVGGNet-LSTM-BP | 97.30 | 97.30 | 97.25 | 97.27 | 95.29 | 95.29 | 96.47 | 95.88 |
CNN | 97.10 | 97.10 | 97.04 | 97.07 | 97.35 | 97.35 | 97.55 | 97.45 |
CNN-LSTM-BP | 98.03 | 98.03 | 98.41 | 98.22 | 98.96 | 98.96 | 98.53 | 98.74 |
Dataset | KDD Cup99 | | | | SCADA2014 | | | |
---|---|---|---|---|---|---|---|---|
Evaluation (%) | ACC | Recall | Precision | F1-Score | ACC | Recall | Precision | F1-Score |
BP | 80.25 | 80.25 | 87.14 | 83.55 | 90.13 | 90.13 | 92.46 | 91.20 |
CNN | 97.10 | 97.10 | 97.04 | 97.07 | 97.35 | 97.35 | 97.55 | 97.45 |
LSTM | 97.10 | 97.10 | 97.25 | 97.17 | 92.77 | 92.77 | 93.04 | 92.90 |
CNN-LSTM | 97.38 | 97.38 | 96.44 | 96.91 | 98.01 | 98.01 | 98.17 | 98.09 |
CNN-BP | 97.44 | 97.44 | 97.54 | 97.49 | 97.30 | 97.30 | 98.41 | 97.85 |
CNN-LSTM-BP | 98.03 | 98.03 | 98.41 | 98.22 | 98.96 | 98.96 | 98.53 | 98.74 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, R.; Wu, Q.; Zhou, Y. Network Security Situation Element Extraction Algorithm Based on Hybrid Deep Learning. Electronics 2025, 14, 553. https://doi.org/10.3390/electronics14030553