Search Results (94)

Search Parameters:
Keywords = network firewalls

28 pages, 1812 KB  
Article
An Integrated Hybrid Deep Learning Framework for Intrusion Detection in IoT and IIoT Networks Using CNN-LSTM-GRU Architecture
by Doaa Mohsin Abd Ali Afraji, Jaime Lloret and Lourdes Peñalver
Computation 2025, 13(9), 222; https://doi.org/10.3390/computation13090222 - 14 Sep 2025
Viewed by 612
Abstract
Intrusion detection systems (IDSs) are critical for securing modern networks, particularly in IoT and IIoT environments where traditional defenses such as firewalls and encryption are insufficient against evolving cyber threats. This paper proposes an enhanced hybrid deep learning model that integrates convolutional neural networks (CNNs), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU) in a multi-branch architecture designed to capture spatial and temporal dependencies while minimizing redundant computations. Unlike conventional hybrid approaches, the proposed parallel–sequential fusion framework leverages the strengths of each component independently before merging features, thereby improving detection granularity and learning efficiency. A rigorous preprocessing pipeline is employed to handle real-world data challenges: missing values are imputed using median filling, class imbalance is mitigated through SMOTE (Synthetic Minority Oversampling Technique), and feature scaling is performed with Min–Max normalization to ensure convergence consistency. The methodology is validated on the TON_IoT and CICIDS2017 datasets, chosen for their diversity and realism in IoT/IIoT attack scenarios. Three hybrid models—CNN-LSTM, CNN-GRU, and the proposed CNN-LSTM-GRU—are assessed for binary and multiclass intrusion detection. Experimental results demonstrate that the CNN-LSTM-GRU architecture achieves superior performance, attaining 100% accuracy in binary classification and 97% in multiclass detection, with balanced precision, recall, and F1-scores across all classes. Furthermore, evaluation on the CICIDS2017 dataset confirms the model’s generalization ability, achieving 99.49% accuracy with precision, recall, and F1-scores of 0.9954, 0.9943, and 0.9949, respectively, outperforming CNN-LSTM and CNN-GRU baselines. Compared to existing IDS models, our approach delivers higher robustness, scalability, and adaptability, making it a promising candidate for next-generation IoT/IIoT security.
(This article belongs to the Section Computational Engineering)
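Two of the preprocessing steps this abstract names, median imputation and Min–Max normalization, can be sketched in plain Python. This is a minimal illustration of the general techniques, not the authors' code; the function names are made up, and SMOTE is left out here.

```python
from statistics import median

def impute_median(column):
    """Replace missing entries (None) with the median of the observed values."""
    observed = [v for v in column if v is not None]
    m = median(observed)
    return [m if v is None else v for v in column]

def min_max_scale(column):
    """Scale values to [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

col = impute_median([4.0, None, 10.0, 6.0])   # missing value becomes 6.0
scaled = min_max_scale(col)                    # 4.0 -> 0.0, 10.0 -> 1.0
```

In practice the scaling parameters (min and max) would be fit on the training split only and reused on the test split, to avoid leakage.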

42 pages, 2224 KB  
Article
Combined Dataset System Based on a Hybrid PCA–Transformer Model for Effective Intrusion Detection Systems
by Hesham Kamal and Maggie Mashaly
AI 2025, 6(8), 168; https://doi.org/10.3390/ai6080168 - 24 Jul 2025
Cited by 1 | Viewed by 1163
Abstract
With the growing number and diversity of network attacks, traditional security measures such as firewalls and data encryption are no longer sufficient to ensure robust network protection. As a result, intrusion detection systems (IDSs) have become a vital component in defending against evolving cyber threats. Although many modern IDS solutions employ machine learning techniques, they often suffer from low detection rates and depend heavily on manual feature engineering. Furthermore, most IDS models are designed to identify only a limited set of attack types, which restricts their effectiveness in practical scenarios where a network may be exposed to a wide array of threats. To overcome these limitations, we propose a novel approach to IDSs by implementing a combined dataset framework based on an enhanced hybrid principal component analysis–Transformer (PCA–Transformer) model, capable of detecting 21 unique classes, comprising 1 benign class and 20 distinct attack types across multiple datasets. The proposed architecture incorporates enhanced preprocessing and feature engineering, followed by the vertical concatenation of the CSE-CIC-IDS2018 and CICIDS2017 datasets. In this design, the PCA component is responsible for feature extraction and dimensionality reduction, while the Transformer component handles the classification task. Class imbalance was addressed using class weights, adaptive synthetic sampling (ADASYN), and edited nearest neighbors (ENN). 
Experimental results show that the model achieves 99.80% accuracy for binary classification and 99.28% for multi-class classification on the combined dataset (CSE-CIC-IDS2018 and CICIDS2017), 99.66% accuracy for binary classification and 99.59% for multi-class classification on the CSE-CIC-IDS2018 dataset, 99.75% accuracy for binary classification and 99.51% for multi-class classification on the CICIDS2017 dataset, and 99.98% accuracy for binary classification and 98.01% for multi-class classification on the NF-BoT-IoT-v2 dataset, significantly outperforming existing approaches by distinguishing a wide range of classes, including benign and various attack types, within a unified detection framework.
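Of the imbalance remedies this abstract lists, class weighting is the easiest to illustrate. The sketch below uses the common inverse-frequency ("balanced") heuristic, w_c = n_samples / (n_classes · count_c); the authors' exact weights may differ.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency, normalised so that
    frequent classes get weights below 1 and rare classes above 1
    (the sklearn-style 'balanced' heuristic)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# 8 benign flows vs. 2 attack flows: the rare class gets the larger weight.
weights = inverse_frequency_weights(["benign"] * 8 + ["dos"] * 2)
```

These weights would then scale each sample's contribution to the training loss, so misclassifying a rare attack costs more than misclassifying benign traffic.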

23 pages, 650 KB  
Article
Exercise-Specific YANG Profile for AI-Assisted Network Security Labs: Bidirectional Configuration Exchange with Large Language Models
by Yuichiro Tateiwa
Information 2025, 16(8), 631; https://doi.org/10.3390/info16080631 - 24 Jul 2025
Viewed by 375
Abstract
Network security courses rely on hands-on labs where students configure virtual Linux networks to practice attack and defense. Automated feedback is scarce because no standard exists for exchanging detailed configurations—interfaces, bridging, routing tables, iptables policies—between exercise software and large language models (LLMs) that could serve as tutors. We address this interoperability gap with an exercise-oriented YANG profile that augments the Internet Engineering Task Force (IETF) ietf-network module with a new network-devices module. The profile expresses Linux interface settings, routing, and firewall rules, and tags each node with roles such as linux-server or linux-firewall. Integrated into our LiNeS Cloud platform, it enables LLMs to both parse and generate machine-readable network states. We evaluated the profile on four topologies—from a simple client–server pair to multi-subnet scenarios with dedicated security devices—using ChatGPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 Flash. Across 1050 evaluation tasks covering profile understanding (n = 180), instance analysis (n = 750), and instance generation (n = 120), the three LLMs answered correctly in 1028 cases, yielding an overall accuracy of 97.9%. Even with only minimal follow-up cues (≤3 turns), rather than handcrafted prompt chains, analysis tasks reached 98.1% accuracy and generation tasks 93.3%. To our knowledge, this is the first exercise-focused YANG profile that simultaneously captures Linux/iptables semantics and is empirically validated across three proprietary LLMs, attaining 97.9% overall task accuracy. These results lay a practical foundation for artificial intelligence (AI)-assisted security labs where real-time feedback and scenario generation must scale beyond human instructor capacity.
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
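The role-tagging idea can be pictured with a toy instance document and a validation pass. Everything below is illustrative: the node names, the structure, and the role vocabulary (beyond the two roles the abstract mentions, linux-server and linux-firewall) are assumptions, not the actual YANG profile.

```python
# Hypothetical role vocabulary; only linux-server and linux-firewall
# are named in the abstract, the rest are invented for the example.
ROLE_VOCAB = {"linux-server", "linux-client", "linux-firewall", "linux-router"}

# Toy network instance loosely inspired by the profile's node/role tagging.
instance = {
    "nodes": [
        {"id": "srv1", "role": "linux-server",
         "interfaces": [{"name": "eth0", "ip": "10.0.0.2/24"}]},
        {"id": "fw1", "role": "linux-firewall",
         "interfaces": [{"name": "eth0", "ip": "10.0.0.1/24"}]},
    ]
}

def validate(instance):
    """Return the ids of nodes whose role is missing or unknown."""
    return [n["id"] for n in instance["nodes"]
            if n.get("role") not in ROLE_VOCAB]

errors = validate(instance)   # [] when every node carries a known role
```

A schema of this kind is what lets an LLM's generated configuration be checked mechanically before it is loaded into the lab platform.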

20 pages, 934 KB  
Article
Towards Efficient and Accurate Network Exposure Surface Analysis for Enterprise Networks
by Zhihua Wang, Minghui Jin, Youlin Hu, Dacheng Shan, Lizhao You and Peijun Chen
Electronics 2025, 14(12), 2409; https://doi.org/10.3390/electronics14122409 - 12 Jun 2025
Viewed by 511
Abstract
Network exposure surface analysis aims to identify network assets that are exposed to the Internet and is critical for enterprise security. However, existing tools face two key challenges: combinatorial explosion in traditional packet testing, and high false positive rates in firewall-based static analysis. To address these issues, this paper proposes a network model-based approach to accurately characterize the forwarding behaviors of devices in enterprise networks, and performs network-level static analysis on the established graph model. Specifically, we construct a device-level forwarding graph using detailed element models for switches and firewalls, capturing the semantics of the forwarding information base, virtual routing and forwarding, virtual systems, and security zones. We further introduce a parallelized multi-threaded breadth-first search (MTBFS) algorithm to efficiently identify reachable assets from Internet-facing ingress interfaces. Experimental results demonstrate a 20× speedup over traditional methods in a large-scale enterprise network consisting of 7970 switches and 16 Internet-facing interfaces.
(This article belongs to the Special Issue Advancements in Network and Data Security)
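The core of the reachability step is a breadth-first search from the Internet-facing ingress interfaces. The sketch below is a single-threaded toy over a plain adjacency map; the paper's MTBFS is parallelized and walks a far richer device model (FIB, VRF, security zones), none of which is modeled here.

```python
from collections import deque

def reachable_assets(forwarding_graph, ingress_nodes):
    """BFS from Internet-facing ingress nodes; the returned set is the
    exposure surface in this simplified model."""
    seen = set(ingress_nodes)
    queue = deque(ingress_nodes)
    while queue:
        node = queue.popleft()
        for nxt in forwarding_graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical forwarding graph: 'host_c' hangs off an unreachable segment.
graph = {"inet": ["fw"], "fw": ["sw1"], "sw1": ["host_a", "host_b"],
         "isolated": ["host_c"]}
exposed = reachable_assets(graph, ["inet"])
```

Running one BFS per ingress interface (16 in the paper's network) is what makes the multi-threaded variant a natural fit: each search is independent.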

23 pages, 5235 KB  
Article
Tunable All-Optical Pattern Recognition System Based on Nonlinear Optical Loop Mirror for Bit-Flip BPSK Targets
by Ying Tang, Ziyi Kang, Xin Li, Ningjing Liang, Jinyong Chang and Genqing Bian
Photonics 2025, 12(4), 342; https://doi.org/10.3390/photonics12040342 - 3 Apr 2025
Cited by 1 | Viewed by 454
Abstract
As the basic physical infrastructure of various networks, optical networks are crucial to the advancement of information technology. Meanwhile, as new technologies emerge, the security of optical networks is facing serious threats. To improve the security of optical networks, optoelectronic firewalls primarily leverage all-optical pattern recognition to perform direct detection and analysis of data transmitted through the optical network at the optical layer. However, the current all-optical pattern recognition system still faces some problems when deployed in optical networks, including phase-locking and relatively low recognition efficiency and scalability. In this paper, we propose a tunable all-optical pattern recognition system based on a nonlinear optical loop mirror (NOLM) for bit-flip BPSK targets. The operational principles and simulation setup of the proposed system are comprehensively described. Numerical simulations demonstrate that the system can accurately recognize and determine the position of 4-bit and 8-bit bit-flip BPSK targets in 16-bit input data with tunable frequencies of 192.8 THz and 193.4 THz at a data rate of 100 Gbps. Finally, the impact of input noise is evaluated by extinction ratio (ER), contrast ratio (CR), Q factor, bit error rate (BER), amplitude modulation (AM), and signal-to-noise ratio (SNR) under both frequencies.
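The recognition task itself, locating a target pattern or its bit-flipped complement inside a longer input word, can be shown with an electronic stand-in. This is only a logical sketch of what the NOLM performs optically; the bit values are illustrative.

```python
def find_pattern_positions(data_bits, target_bits):
    """Return every position where the target or its bit-flipped
    complement occurs in the input word (bit-flip BPSK matching)."""
    flipped = [1 - b for b in target_bits]
    hits = []
    for i in range(len(data_bits) - len(target_bits) + 1):
        window = data_bits[i:i + len(target_bits)]
        if window == target_bits or window == flipped:
            hits.append(i)
    return hits

# 16-bit input word, 4-bit target, as in the paper's simulation scale.
data = [1, 0, 1, 1,  0, 1, 0, 0,  1, 1, 0, 0,  1, 0, 1, 1]
hits = find_pattern_positions(data, [1, 0, 1, 1])
```

Positions 0 and 12 match the target directly, and position 4 matches its complement, which is exactly why a bit-flip-tolerant matcher is needed for BPSK, where a phase ambiguity can invert every bit.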

17 pages, 1790 KB  
Article
Advancing Artificial Intelligence of Things Security: Integrating Feature Selection and Deep Learning for Real-Time Intrusion Detection
by Faisal Albalwy and Muhannad Almohaimeed
Systems 2025, 13(4), 231; https://doi.org/10.3390/systems13040231 - 28 Mar 2025
Cited by 2 | Viewed by 2129
Abstract
The size of data transmitted through various communication systems has recently increased due to technological advancements in the Artificial Intelligence of Things (AIoT) and the industrial Internet of Things (IoT). IoT communications rely on intrusion detection systems (IDS) to ensure secure and reliable data transmission, as traditional security mechanisms, such as firewalls and encryption, remain susceptible to attacks. An effective IDS is crucial as evolving threats continue to expose new security vulnerabilities. This study proposes an integrated approach combining feature selection methods and principal component analysis (PCA) with advanced deep learning (DL) models for real-time intrusion detection, significantly improving both computational efficiency and accuracy compared to previous methods. Specifically, five feature selection methods (correlation-based feature subset selection (CFS), Pearson analysis, gain ratio (GR), information gain (IG), and symmetrical uncertainty (SU)) were integrated with PCA to optimise feature dimensionality and enhance predictive performance. Three classifiers—artificial neural networks (ANNs), deep neural networks (DNNs), and TabNet—were evaluated on the RT-IoT2022 dataset. The ANN classifier combined with Pearson analysis and PCA achieved the highest intrusion detection accuracy of 99.7%, demonstrating substantial performance improvements over ANN alone (92%) and TabNet (94%) without feature selection. Key features identified by Pearson analysis included id.resp_p, service, fwd_init_window_size and flow_SYN_flag_count, which significantly contributed to the performance gains. These results indicate that combining Pearson analysis with PCA consistently improves classification performance across multiple models. Furthermore, the deployment of classifiers directly on the original dataset decreased the accuracy, emphasising the importance of feature selection in enhancing AIoT and IoT security. This predictive model strengthens IDS capabilities, enabling early threat detection and proactive mitigation strategies against cyberattacks in real-time AIoT environments.
(This article belongs to the Special Issue Integration of Cybersecurity, AI, and IoT Technologies)
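Pearson-based feature ranking, one of the five selection methods compared, is simple to sketch. The feature name flow_SYN_flag_count comes from the abstract; the data values and the second feature are invented for illustration.

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_features(features, target):
    """Rank feature names by |r| against the target label, strongest first."""
    scored = {name: abs(pearson(col, target)) for name, col in features.items()}
    return sorted(scored, key=scored.get, reverse=True)

features = {"flow_SYN_flag_count": [0, 1, 2, 3],   # rises with the label
            "noise": [5, 1, 4, 2]}                  # uncorrelated filler
order = rank_features(features, [0, 0, 1, 1])
```

In the paper's pipeline, the top-ranked features would then be passed through PCA before reaching the ANN classifier.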

29 pages, 1935 KB  
Article
Enhancing Security in 5G Edge Networks: Predicting Real-Time Zero Trust Attacks Using Machine Learning in SDN Environments
by Fiza Ashfaq, Muhammad Wasim, Mumtaz Ali Shah, Abdul Ahad and Ivan Miguel Pires
Sensors 2025, 25(6), 1905; https://doi.org/10.3390/s25061905 - 19 Mar 2025
Cited by 1 | Viewed by 1738
Abstract
The Internet has been vulnerable to several attacks as it has expanded, including spoofing, viruses, malicious code attacks, and Distributed Denial of Service (DDoS). The three main types of attacks most frequently reported in the current period are viruses, DoS attacks, and DDoS attacks. Advanced DDoS and DoS attacks are too complex for traditional security solutions, such as intrusion detection systems and firewalls, to detect. The application of AI-based machine learning methods has led to the introduction of several novel attack detection systems. Due to their remarkable performance, machine learning models, in particular, have been essential in identifying DDoS attacks. However, there is a considerable gap in the work on real-time detection of such attacks. This study uses Mininet with the POX Controller to simulate an environment to detect DDoS attacks in real-time settings. The CICDDoS2019 dataset is used to identify and classify such attacks in the simulated environment. In addition, a virtual software-defined network (SDN) is used to collect network information from the surrounding area. When an attack occurs, the pre-trained models are used to analyze the traffic and predict the attack in real-time. The performance of the proposed methodology is evaluated based on two metrics: accuracy and detection time. The results reveal that the proposed model achieves an accuracy of 99% within 1 s of the detection time.
(This article belongs to the Special Issue Cybersecurity and Privacy Protection: The Key to IoT Sensors)

24 pages, 1666 KB  
Review
An Overview of Distributed Firewalls and Controllers Intended for Mobile Cloud Computing
by Cyril Godwin Suetor, Daniele Scrimieri, Amna Qureshi and Irfan-Ullah Awan
Appl. Sci. 2025, 15(4), 1931; https://doi.org/10.3390/app15041931 - 13 Feb 2025
Viewed by 1500
Abstract
Mobile cloud computing (MCC) is a representation of the interaction between cloud computing and mobile devices, reshaping the utilisation of technology for consumers and businesses. This level of mobility and decentralisation of devices in MCC necessitates a highly secured framework to facilitate it. This literature review on distributed firewalls and controllers for mobile cloud computing reveals the critical need for a security framework tailored to the dynamic and decentralised nature of MCC. This study further emphasises the importance of integrating distributed firewalls with central controllers to address the unique security challenges in MCC, such as nomadic device behaviour and resource allocation optimisation. Additionally, it highlights the significance of Cloud Access Security Brokers (CASBs) in improving data security and ensuring compliance within mobile cloud applications. This review also addresses specific research questions related to security concerns, scalable framework development, and the effectiveness of distributed firewall and controller systems in MCC. It explores the complexities involved in merging Software-Defined Networking (SDN), Network Function Virtualisation (NFV), and CASB into a cohesive system, focusing on the need to resolve interoperability issues and maintain low latency and high throughput while balancing performance across distributed firewalls and controllers. The review also points to the necessity of privacy-preserving methods within CASB to uphold privacy standards in MCC. Furthermore, it identifies the integration of NFV and SDN as crucial for enhancing security and performance in MCC environments, and stresses the importance of future research directions, such as the incorporation of machine learning and edge computing, to further improve the security and efficiency of MCC systems. To the best of our knowledge, this review is the first to comprehensively examine the integration of these advanced technologies within the context of MCC.

61 pages, 10098 KB  
Article
Segmentation and Filtering Are Still the Gold Standard for Privacy in IoT—An In-Depth STRIDE and LINDDUN Analysis of Smart Homes
by Henrich C. Pöhls, Fabian Kügler, Emiliia Geloczi and Felix Klement
Future Internet 2025, 17(2), 77; https://doi.org/10.3390/fi17020077 - 10 Feb 2025
Viewed by 1302
Abstract
Every year, more and more electronic devices are used in households, which certainly leads to an increase in the total number of communications between devices. During communication, a huge amount of information is transmitted, which can be critical or even malicious. To avoid the transmission of unnecessary information, a filtering mechanism can be applied. Filtering is a long-standing method used by network engineers to segregate and thus block unwanted traffic from reaching certain devices. In this work, we show how to apply this to the Internet of Things (IoT) Smart Home domain as it introduces numerous networked devices into our daily lives. To analyse the positive influence of filtering on security and privacy, we offer the results from our in-depth STRIDE and LINDDUN analysis of several Smart Home scenarios before and after applying filtering. To show that filtering can be applied to other IoT domains, we offer a brief glimpse into the domain of smart cars.
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
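The segmentation-plus-filtering idea the title refers to amounts to a default-deny policy between device segments. A minimal sketch, with invented segment names and the flow granularity chosen purely for illustration:

```python
def make_filter(allowed_flows):
    """allowed_flows: set of (src_segment, dst_segment, port) triples.
    Returns a predicate that admits only explicitly allowed flows
    (default-deny, the classic segmentation stance)."""
    def permit(src, dst, port):
        return (src, dst, port) in allowed_flows
    return permit

# Hypothetical policy: smart-home appliances may only talk to the local
# hub over TLS; everything else (e.g. appliance -> internet telnet) is dropped.
permit = make_filter({("app", "hub", 443), ("hub", "app", 443)})
```

The STRIDE/LINDDUN analyses in the paper compare threat exposure before and after such rules are in place; the filter itself stays this simple, which is part of the argument for it.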

38 pages, 2036 KB  
Article
Advancing Cybersecurity with Honeypots and Deception Strategies
by Zlatan Morić, Vedran Dakić and Damir Regvart
Informatics 2025, 12(1), 14; https://doi.org/10.3390/informatics12010014 - 31 Jan 2025
Cited by 4 | Viewed by 12115
Abstract
Cybersecurity threats are becoming more intricate, requiring preemptive actions to safeguard digital assets. This paper examines the function of honeypots as critical instruments for threat detection, analysis, and mitigation. A novel methodology for comparative analysis of honeypots is presented, offering a systematic framework to assess their efficacy. Seven honeypot solutions, namely Dionaea, Cowrie, Honeyd, Kippo, Amun, Glastopf, and Thug, are analyzed, encompassing various categories, including SSH and HTTP honeypots. The solutions are assessed via simulated network attacks and comparative analyses based on established criteria, including detection range, reliability, scalability, and data integrity. Dionaea and Cowrie exhibited remarkable versatility and precision, whereas Honeyd revealed scalability benefits despite encountering data quality issues. The research emphasizes the smooth incorporation of honeypots with current security protocols, including firewalls and incident response strategies, while offering comprehensive insights into attackers’ tactics, techniques, and procedures (TTPs). Emerging trends are examined, such as incorporating machine learning for adaptive detection and creating cloud-based honeypots. Recommendations for optimizing honeypot deployment include strategic placement, comprehensive monitoring, and ongoing updates. This research provides a detailed framework for selecting and implementing honeypots customized to organizational requirements.

26 pages, 559 KB  
Article
A Petri Net and LSTM Hybrid Approach for Intrusion Detection Systems in Enterprise Networks
by Gaetano Volpe, Marco Fiore, Annabella la Grasta, Francesca Albano, Sergio Stefanizzi, Marina Mongiello and Agostino Marcello Mangini
Sensors 2024, 24(24), 7924; https://doi.org/10.3390/s24247924 - 11 Dec 2024
Cited by 1 | Viewed by 1710
Abstract
Intrusion Detection Systems (IDSs) are a crucial component of modern corporate firewalls. The ability of IDS to identify malicious traffic is a powerful tool to prevent potential attacks and keep a corporate network secure. In this context, Machine Learning (ML)-based methods have proven to be very effective for attack identification. However, traditional approaches are not always applicable in a real-time environment as they do not integrate concrete traffic management after a malicious packet pattern has been identified. In this paper, a novel combined approach to both identify and discard potential malicious traffic in a real-time fashion is proposed. In detail, a Long Short-Term Memory (LSTM) supervised artificial neural network model is provided in which consecutive packet groups are considered as they flow through the corporate network. Moreover, the whole IDS architecture is modeled by a Petri Net (PN) that either blocks or allows packet flow throughout the network based on the LSTM model output. The novel hybrid approach combining LSTM with Petri Nets achieves a 99.71% detection accuracy—a notable improvement over traditional LSTM-only methods, which averaged around 97%. The LSTM–Petri Net approach is an innovative solution combining machine learning with formal network modeling for enhanced threat detection, offering improved accuracy and real-time adaptability to meet the rapid security needs of virtual environments and CPS. Moreover, the approach emphasizes the innovative role of the Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) as a form of “virtual sensing technology” applied to advanced network security. An extensive case study with promising results is provided by training the model with the popular IDS 2018 dataset.
(This article belongs to the Special Issue Virtual Reality and Sensing Techniques for Human)
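The "Petri net gates the traffic, the classifier supplies the verdict" coupling can be pictured with a two-place toy net. This is a deliberately minimal stand-in: the paper's Petri net is far richer, and the LSTM is replaced here by a stub predicate.

```python
class PacketGate:
    """Tiny Petri-net-style gate: a token in place 'open' lets packets
    through; a 'malicious' verdict fires a transition that moves the
    token to 'blocked', after which traffic is discarded."""
    def __init__(self):
        self.marking = {"open": 1, "blocked": 0}

    def fire_block(self):
        if self.marking["open"] >= 1:        # transition enabled only if a token is present
            self.marking["open"] -= 1
            self.marking["blocked"] += 1

    def forward(self, packet, classify):
        if classify(packet) == "malicious":
            self.fire_block()
        return self.marking["open"] == 1     # True -> packet allowed through

gate = PacketGate()
stub = lambda pkt: "malicious" if pkt.get("syn_flood") else "benign"  # stand-in for the LSTM
ok = gate.forward({"syn_flood": False}, stub)
bad = gate.forward({"syn_flood": True}, stub)
```

The appeal of the formal model is that properties such as "no packet is forwarded once the blocked place is marked" can be verified on the net itself, independently of the classifier.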

19 pages, 1186 KB  
Article
PrismParser: A Framework for Implementing Efficient P4-Programmable Packet Parsers on FPGA
by Parisa Mashreghi-Moghadam, Tarek Ould-Bachir and Yvon Savaria
Future Internet 2024, 16(9), 307; https://doi.org/10.3390/fi16090307 - 27 Aug 2024
Viewed by 1404
Abstract
The increasing complexity of modern networks and their evolving needs demand flexible, high-performance packet processing solutions. The P4 language excels in specifying packet processing in software-defined networks (SDNs). Field-programmable gate arrays (FPGAs) are ideal for P4-based packet parsers due to their reconfigurability and ability to handle data transmitted at high speed. This paper introduces three FPGA-based P4-programmable packet parsing architectures—base, overlay, and pipeline—that translate P4 specifications into adaptable hardware implementations, each optimized for a different packet parsing performance target. As modern network infrastructures evolve, the need for multi-tenant environments becomes increasingly critical. Multi-tenancy allows multiple independent users or organizations to share the same physical network resources while maintaining isolation and customized configurations. The rise of 5G and cloud computing has accelerated the demand for network slicing and virtualization technologies, enabling efficient resource allocation and management for multiple tenants. By leveraging P4-programmable packet parsers on FPGAs, our framework addresses these challenges by providing flexible and scalable solutions for multi-tenant network environments. The base parser offers a simple design for essential packet parsing, using minimal resources for high-speed processing. The overlay parser extends the base design for parallel processing, supporting various bus sizes and throughputs. The pipeline parser boosts throughput by segmenting parsing into multiple stages. The efficiency of the proposed approaches is evaluated through detailed resource consumption metrics measured on an Alveo U280 board, demonstrating throughputs of 15.2 Gb/s for the base design, 15.2 Gb/s to 64.42 Gb/s for the overlay design, and up to 282 Gb/s for the pipelined design. These results demonstrate a range of high performances across varying throughput requirements. The proposed approach yields streaming packet parsers directly from P4 programs with low latency and high throughput, supporting parsing graphs with up to seven transitioning nodes and four connections between nodes. The functionality of the parsers was tested on enterprise networks, a firewall, and a 5G Access Gateway Function graph.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
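A P4 parse graph is a state machine that selects the next header from a field of the current one (P4's `select` on, e.g., the EtherType). A software stand-in makes the structure clear; the graph below is illustrative, not one of the paper's test graphs.

```python
# Each state inspects already-extracted header fields and names the next
# state, mirroring a P4 parser's select() transitions.
PARSE_GRAPH = {
    "ethernet": lambda hdrs: "ipv4" if hdrs["eth_type"] == 0x0800 else "accept",
    "ipv4":     lambda hdrs: "tcp" if hdrs["ip_proto"] == 6 else "accept",
    "tcp":      lambda hdrs: "accept",
}

def parse(headers):
    """Walk the parse graph from 'ethernet' to 'accept', recording the path."""
    state, path = "ethernet", []
    while state != "accept":
        path.append(state)
        state = PARSE_GRAPH[state](headers)
    return path

path = parse({"eth_type": 0x0800, "ip_proto": 6})   # Ethernet / IPv4 / TCP frame
```

On the FPGA side, each such state becomes hardware that extracts fields and drives the transition logic, which is why the node and edge counts of the graph (up to seven nodes and four connections here) bound the designs the framework can emit.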

22 pages, 4992 KB  
Article
Increasing the Security of Network Data Transmission with a Configurable Hardware Firewall Based on Field Programmable Gate Arrays
by Marco Grossi, Fabrizio Alfonsi, Marco Prandini and Alessandro Gabrielli
Future Internet 2024, 16(9), 303; https://doi.org/10.3390/fi16090303 - 23 Aug 2024
Cited by 4 | Viewed by 1669
Abstract
One of the most common mitigations against network-borne security threats is the deployment of firewalls, i.e., systems that can observe traffic and apply rules to let it through if it is benign or drop packets that are recognized as malicious. Cheap and open-source (a feature that is greatly appreciated in the security world) software solutions are available but may be too slow for high-rate channels. Hardware appliances are efficient but opaque and they are often very expensive. In this paper, an open-hardware approach is proposed for the design of a firewall, implemented on off-the-shelf components such as an FPGA (the Xilinx KC705 development board), and it is tested using controlled Ethernet traffic created with a packet generator as well as with real internet traffic. The proposed system can filter packets based on a set of rules that can use the whitelist or blacklist approach. It generates a set of statistics, such as the number of received/transmitted packets and the amount of received/transmitted data, which can be used to detect potential anomalies in the network traffic. The firewall has been experimentally validated in the case of a network data throughput of 1 Gb/s, and preliminary simulations have shown that the system can be upgraded with minor modifications to work at 10 Gb/s. Test results have shown that the proposed firewall features a latency of 627 ns and a maximum data throughput of 0.982 Gb/s.
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in Italy 2024–2025)
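The rule-based filtering the abstract describes (whitelist or blacklist matching plus packet/byte counters for anomaly detection) can be sketched in software. This is only an illustrative Python model of the matching logic, not the authors' FPGA firmware; the `Rule` and `Firewall` names and the packet-dictionary fields are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Rule:
    # None acts as a wildcard for that field
    src_ip: Optional[str] = None
    dst_port: Optional[int] = None

    def matches(self, packet: dict) -> bool:
        return ((self.src_ip is None or packet["src_ip"] == self.src_ip) and
                (self.dst_port is None or packet["dst_port"] == self.dst_port))

@dataclass
class Firewall:
    rules: list
    whitelist: bool = True  # True: pass only matches; False (blacklist): drop matches
    stats: dict = field(default_factory=lambda: {
        "rx_pkts": 0, "tx_pkts": 0, "rx_bytes": 0, "tx_bytes": 0})

    def filter(self, packet: dict) -> bool:
        """Return True if the packet is forwarded, updating the counters
        that the abstract suggests can flag traffic anomalies."""
        self.stats["rx_pkts"] += 1
        self.stats["rx_bytes"] += packet["length"]
        matched = any(r.matches(packet) for r in self.rules)
        passed = matched if self.whitelist else not matched
        if passed:
            self.stats["tx_pkts"] += 1
            self.stats["tx_bytes"] += packet["length"]
        return passed
```

In the hardware design, each rule comparison happens in parallel per clock cycle, which is what makes the reported 627 ns latency at 1 Gb/s feasible; the sequential `any(...)` here is only a functional stand-in.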

23 pages, 4001 KB  
Article
Enhancing Firewall Packet Classification through Artificial Neural Networks and Synthetic Minority Over-Sampling Technique: An Innovative Approach with Evaluative Comparison
by Adem Korkmaz, Selma Bulut, Tarık Talan, Selahattin Kosunalp and Teodor Iliev
Appl. Sci. 2024, 14(16), 7426; https://doi.org/10.3390/app14167426 - 22 Aug 2024
Cited by 4 | Viewed by 2119
Abstract
Firewall packet classification is a critical component of network security, demanding precise and reliable methods to ensure optimal functionality. This study introduces an advanced approach that combines Artificial Neural Networks (ANNs) with various data balancing techniques, including the Synthetic Minority Over-sampling Technique (SMOTE), ADASYN, and BorderlineSMOTE, to enhance the classification of firewall packets into four distinct classes: ‘allow’, ‘deny’, ‘drop’, and ‘reset-both’. Initial experiments without data balancing revealed that while the ANN model achieved perfect precision, recall, and F1-Scores for the ‘allow’, ‘deny’, and ‘drop’ classes, it struggled to accurately classify the ‘reset-both’ class. To address this, we applied SMOTE, ADASYN, and BorderlineSMOTE to mitigate class imbalance, which led to significant improvements in overall classification performance. Among the techniques, the ANN combined with BorderlineSMOTE demonstrated superior efficacy, achieving a 97% overall accuracy and consistently high performance across all classes, particularly in the accurate classification of minority classes. In contrast, while SMOTE and ADASYN also improved the model’s performance, the results with BorderlineSMOTE were notably more balanced and reliable. This study provides a comparative analysis with existing machine learning models, highlighting the effectiveness of the proposed approach in firewall packet classification. The synthesized results validate the potential of integrating ANNs with advanced data balancing techniques to enhance the robustness and reliability of network security systems. The findings underscore the importance of addressing class imbalance in machine learning models, particularly in security-critical applications, and offer valuable insights for the design and improvement of future network security infrastructures. Full article
(This article belongs to the Special Issue Progress and Research in Cybersecurity and Data Privacy)
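The core idea behind SMOTE and its variants, as used above to rebalance the rare 'reset-both' class, is to synthesize new minority samples by interpolating between an existing minority sample and one of its nearest minority neighbours. A minimal stdlib-only sketch of that interpolation step (not the imbalanced-learn implementation, and the function name is hypothetical):

```python
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic samples: pick a minority sample, pick one of
    its k nearest minority neighbours, and interpolate between the two."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: sq_dist(base, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

BorderlineSMOTE, which the study found most effective, refines this by interpolating only from minority samples that lie near the class boundary (those with many majority-class neighbours), concentrating synthetic data where the classifier is most confused.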

22 pages, 522 KB  
Article
Revolutionizing SIEM Security: An Innovative Correlation Engine Design for Multi-Layered Attack Detection
by Muhammad Sheeraz, Muhammad Hanif Durad, Muhammad Arsalan Paracha, Syed Muhammad Mohsin, Sadia Nishat Kazmi and Carsten Maple
Sensors 2024, 24(15), 4901; https://doi.org/10.3390/s24154901 - 28 Jul 2024
Cited by 4 | Viewed by 6216
Abstract
Advances in connectivity, communication, computation, and algorithms are driving a revolution that will bring economic and social benefits through smart technologies of the Industry 4.0 era. At the same time, attackers are targeting this expanded cyberspace to exploit it. Therefore, many cyberattacks are reported each year at an increasing rate. Traditional security devices such as firewalls, intrusion detection systems (IDSs), intrusion prevention systems (IPSs), anti-viruses, and the like, often cannot detect sophisticated cyberattacks. The security information and event management (SIEM) system has proven to be a very effective security tool for detecting and mitigating such cyberattacks. A SIEM system provides a holistic view of the security status of a corporate network by analyzing log data from various network devices. The correlation engine is the most important module of the SIEM system. In this study, we propose the optimized correlator (OC), a novel correlation engine that replaces the traditional regex matching sub-module with a novel high-performance multiple regex matching library called “Hyperscan” for parallel log data scanning to improve the performance of the SIEM system. Log files of 102 MB, 256 MB, 512 MB, and 1024 MB, generated from log data received from various devices in the network, are input into the OC and simple event correlator (SEC) for applying correlation rules. The results indicate that OC is 21 times faster than SEC in real-time response and 2.5 times more efficient in execution time. Furthermore, OC can detect multi-layered attacks successfully. Full article
(This article belongs to the Special Issue Data Protection and Privacy in Industry 4.0 Era)
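The performance gain reported above comes from matching many correlation-rule regexes against log data in a single pass, which is what Hyperscan provides natively. A rough stdlib-only sketch of the multi-pattern idea using Python's `re` with named alternation branches (the rule names and patterns are invented for illustration; this is not the OC engine itself):

```python
import re

def build_matcher(rules):
    """Compile a dict of {rule_name: regex} into one scanner that reports
    which correlation rule (if any) a log line triggers."""
    combined = "|".join(f"(?P<{name}>{pattern})" for name, pattern in rules.items())
    compiled = re.compile(combined)

    def scan(line):
        m = compiled.search(line)
        # lastgroup names the alternation branch that matched
        return m.lastgroup if m else None

    return scan

# Hypothetical correlation rules
rules = {
    "ssh_fail": r"Failed password for",
    "port_scan": r"SYN flood|port scan",
}
scan = build_matcher(rules)
```

Hyperscan goes further than this alternation trick: it compiles all patterns into one DFA/NFA hybrid and streams input through it once, which is why the OC can scan gigabyte-scale log files fast enough for real-time multi-layered attack detection.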
