Search Results (1,337)

Search Parameters:
Keywords = malicious attacks

25 pages, 1432 KB  
Article
GATransformer: A Network Threat Detection Method Based on Graph-Sequence Enhanced Transformer
by Qigang Zhu, Xiong Zhan, Wei Chen, Yuanzhi Li, Hengwei Ouyang, Tian Jiang and Yu Shen
Electronics 2025, 14(19), 3807; https://doi.org/10.3390/electronics14193807 - 25 Sep 2025
Abstract
Emerging complex multi-step attacks such as Advanced Persistent Threats (APTs) pose significant risks to national economic development, security, and social stability. Effectively detecting these sophisticated threats is a critical challenge. While deep learning methods show promise in identifying unknown malicious behaviors, they often struggle with fragmented modal information, limited feature representation, and generalization. To address these limitations, we propose GATransformer, a new dual-modal detection method that integrates topological structure analysis with temporal sequence modeling. Its core lies in a cross-attention semantic fusion mechanism, which deeply integrates heterogeneous features and effectively mitigates the constraints of unimodal representations. GATransformer reconstructs network behavior representation via a parallel processing framework in which graph attention captures intricate spatial dependencies, and self-attention focuses on modeling long-range temporal correlations. Experimental results on the CIDDS-001 and CIDDS-002 datasets demonstrate the superior performance of our method compared to baseline methods with detection accuracies of 99.74% (nodes) and 88.28% (edges) on CIDDS-001 and 99.99% and 99.98% on CIDDS-002, respectively. Full article
(This article belongs to the Special Issue Advances in Information Processing and Network Security)
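As a rough illustration of the dual-modal fusion the abstract describes (not the authors' implementation; module choices and dimensions are assumptions), a cross-attention layer that lets sequence tokens attend to graph-attention node embeddings could be sketched in PyTorch as:

```python
# Illustrative sketch only: fuse graph-attention node embeddings with
# self-attention sequence features via cross-attention, then classify.
import torch
import torch.nn as nn

class DualModalFusion(nn.Module):
    def __init__(self, dim=128, heads=4, num_classes=2):
        super().__init__()
        self.seq_encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, graph_feats, seq_feats):
        # graph_feats: (B, N, dim) node embeddings from a graph-attention encoder
        # seq_feats:   (B, T, dim) flow/packet sequence embeddings
        seq = self.seq_encoder(seq_feats)                           # long-range temporal modeling
        fused, _ = self.cross_attn(seq, graph_feats, graph_feats)   # sequence attends to topology
        return self.classifier(fused.mean(dim=1))                   # benign vs. malicious logits
```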

21 pages, 2310 KB  
Article
Development of a Model for Detecting Spectrum Sensing Data Falsification Attack in Mobile Cognitive Radio Networks Integrating Artificial Intelligence Techniques
by Lina María Yara Cifuentes, Ernesto Cadena Muñoz and Rafael Cubillos Sánchez
Algorithms 2025, 18(10), 596; https://doi.org/10.3390/a18100596 - 24 Sep 2025
Viewed by 117
Abstract
Mobile Cognitive Radio Networks (MCRNs) have emerged as a promising solution to address spectrum scarcity by enabling dynamic access to underutilized frequency bands assigned to Primary or Licensed Users (PUs). These networks rely on Cooperative Spectrum Sensing (CSS) to identify available spectrum, but this collaborative approach also introduces vulnerabilities to security threats, most notably Spectrum Sensing Data Falsification (SSDF) attacks. In such attacks, malicious nodes deliberately report false sensing information, undermining the reliability and performance of the network. This paper investigates the application of machine learning techniques to detect and mitigate SSDF attacks in MCRNs, particularly considering the additional challenges introduced by node mobility. We propose a hybrid detection framework that integrates a reputation-based weighting mechanism with Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) classifiers to improve detection accuracy and reduce the influence of falsified data. Experimental results on software-defined radio (SDR) demonstrate that the proposed method significantly enhances the system’s ability to identify malicious behavior: it achieves high detection accuracy, reduces the rate of data falsification by approximately 5–20%, increases the probability of attack detection, and supports the dynamic creation of a blacklist to isolate malicious nodes. These results underscore the potential of combining machine learning with trust-based mechanisms to strengthen the security and reliability of mobile cognitive radio networks. Full article
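A minimal sketch of how a reputation-weighted SVM/KNN fusion over cooperative sensing reports might look (feature layout, weighting scheme, and threshold are assumptions, not the paper's exact design):

```python
# Sketch: combine SVM and KNN occupancy estimates, then down-weight
# low-reputation (possibly falsified) reports before the fused decision.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def fused_decision(X_train, y_train, reports, reputation):
    """reports: one sensing-report feature vector per node; reputation: trust in [0, 1] per node."""
    svm = SVC(probability=True).fit(X_train, y_train)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    p = 0.5 * svm.predict_proba(reports)[:, 1] + 0.5 * knn.predict_proba(reports)[:, 1]
    weighted = np.average(p, weights=reputation)   # trusted nodes dominate the vote
    return weighted > 0.5                          # fused "primary user present" decision
```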

39 pages, 4702 KB  
Article
DCmal-2025: A Novel Routing-Based DisConnectivity Malware—Development, Impact, and Countermeasures
by Mai Abu-Jazoh, Iman Almomani and Khair Eddin Sabri
Appl. Sci. 2025, 15(18), 10219; https://doi.org/10.3390/app151810219 - 19 Sep 2025
Viewed by 516
Abstract
Operating systems such as Windows, Linux, and macOS include built-in commands that enable administrators to perform essential tasks. These same commands can be exploited by attackers for malicious purposes that may go undetected by traditional security solutions. This research identifies an unmitigated risk of misuse of a standard command to disconnect network services on victim devices. Thus, we developed a novel Proof-of-Concept (PoC) malware named DCmal-2025 and documented every step of its lifecycle, including the core idea of the malware, its development, impact, analysis, and possible countermeasures. The proposed DCmal-2025 malware can cause a Denial-of-Service (DoS) condition without exploiting any software vulnerabilities; instead, it misuses legitimate standard commands and manipulates the routing table to achieve this. We developed two types of DCmal-2025: one that triggers a DoS immediately and another that initiates it after a predefined delay before restoring connectivity. This study used VirusTotal to evaluate the detection rates of 72 antivirus engines against both malware types (DCmal-2025 Type 1 and Type 2), each written in C and Rust. The source code for both types was undetected by any of the antivirus engines. However, after compiling the source code into executable files, only some Windows executables were flagged, and only with generic detection labels unrelated to DCmal-2025 behaviour; Linux executables remained undetected. Rust significantly reduced detection rates compared to C: from 7.04% to 1.39% for Type 1 and from 9.72% to 4.17% for Type 2. An educational institution was chosen as a case study, and its network topology was simulated using the GNS3 simulator. The results of the case study reveal that both malware types could cause a successful DoS attack by disconnecting targeted devices from all network-based services. The findings underscore the need for enhanced detection methods and heightened awareness that unexplained network disconnections may be caused by undetected malware, such as DCmal-2025. Full article
(This article belongs to the Special Issue Approaches to Cyber Attacks and Malware Detection)
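On the countermeasure side, one simple check (our own illustration, not a method from the paper) is to poll the routing table and alert when the default route disappears, since that is the kind of unexplained disconnection this misuse of routing commands produces:

```python
# Defensive monitoring sketch for Linux hosts; Windows would parse "route print" instead.
import subprocess
import time

def has_default_route() -> bool:
    out = subprocess.run(["ip", "route"], capture_output=True, text=True).stdout
    return any(line.startswith("default") for line in out.splitlines())

while True:
    if not has_default_route():
        print("ALERT: default route removed; possible routing-table tampering")
    time.sleep(30)
```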

23 pages, 3656 KB  
Article
DDoS Attacks Detection in SDN Through Network Traffic Feature Selection and Machine Learning Models
by Edith Paola Estupiñán Cuesta, Juan Carlos Martínez Quintero and Juan David Avilés Palma
Telecom 2025, 6(3), 69; https://doi.org/10.3390/telecom6030069 - 19 Sep 2025
Viewed by 390
Abstract
This research presents a methodology for the detection of distributed denial-of-service (DDoS) attacks in software-defined networks (SDNs). An SDN was configured using the Mininet simulator, the OpenDaylight controller, and a web server, which served as the target of a DDoS attack on the HTTP protocol. The attack tools GoldenEye, Slowloris, HULK, Slowhttptest, and XerXes were used, and two datasets were built using the CICFlowMeter and NTLFlowLyzer flow and feature generation tools, with 424,922 and 731,589 flows, respectively, as well as two independent test datasets. These tools were also compared in terms of their functionality and efficiency in generating flows and features. Finally, the XGBoost and Random Forest models were evaluated with each dataset, with the objective of identifying the model that provides the best classification result in the detection of malicious traffic. The XGBoost model achieved accuracies of 99.48% and 97.61%, while the Random Forest model obtained better results of 99.97% and 99.99%, using the CIC-Dataset and NTL-Dataset, respectively. These results indicate that the Random Forest model outperformed XGBoost in classification, as it also achieved the lowest false negative rate of 0.00001 on the NTL-Dataset. Full article
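A minimal sketch of the classification step (file path, column name, and label value are assumptions tied to typical CICFlowMeter exports):

```python
# Train a Random Forest on labeled flow features and report test accuracy.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cic_flows.csv")              # hypothetical CICFlowMeter export
X = df.drop(columns=["Label"])                 # per-flow features
y = (df["Label"] != "BENIGN").astype(int)      # 1 = DDoS traffic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
rf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("Random Forest accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```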

17 pages, 983 KB  
Article
Multidimensional Fault Injection and Simulation Analysis for Random Number Generators
by Xianli Xie, Jiansheng Chen, Jiajun Zhou, Ruiqing Zhai and Xianzhao Xia
Electronics 2025, 14(18), 3702; https://doi.org/10.3390/electronics14183702 - 18 Sep 2025
Viewed by 232
Abstract
Random number generators play a critical role in ensuring information security, supporting encrypted communications, and preventing data leakage. However, the random number generators widely used in hardware face potential threats such as environmental disturbances and fault injection attacks. Especially in automotive-grade environments, chips encounter threat scenarios involving multidimensional fault injection, which may lead to functional failures or malicious exploitation, endangering the security of the entire system. This paper focuses on a Counter Mode Deterministic Random Bit Generator (CTR-DRBG) based on the AES-128 algorithm and implements a hardware prototype system compliant with the NIST SP 800-22 standard on an FPGA platform. Centering on typical fault modes such as temperature disturbances, voltage glitches, electromagnetic interference, and bit flips, single-dimensional and multidimensional fault injection and simulated fault injection experiments were designed and conducted. The impact characteristics and sensitivities of electromagnetic, voltage, and temperature faults on the output sequences of random numbers were systematically evaluated. The experimental results show that this type of random number generator exhibits module-level differences in vulnerability under physical disturbances; in particular, the data transmission processes of encryption paths and critical registers demonstrate higher sensitivity to flip-type faults. This research provides a feasible analysis framework and practical basis for the security assessment and fault-tolerant design of random number generators, with engineering applicability and theoretical reference value. Full article
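A toy simulated-fault experiment in the same spirit (the fault model and block size are our assumptions, not the paper's FPGA setup): inject stuck-at-0 faults into a block of generator output and check whether a NIST-style monobit statistic flags the bias.

```python
# Simulated fault injection against a random bit stream, checked with the
# SP 800-22 frequency (monobit) test statistic.
import math
import random
import secrets

def monobit_p_value(bits):
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

block = [secrets.randbits(1) for _ in range(4096)]      # stand-in for CTR-DRBG output
faulty = list(block)
for i in random.sample(range(len(faulty)), k=512):      # 512 stuck-at-0 faults
    faulty[i] = 0

print("clean  p-value:", monobit_p_value(block))        # typically well above 0.01
print("faulty p-value:", monobit_p_value(faulty))       # typically far below 0.01
```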

18 pages, 456 KB  
Article
Machine Learning-Powered IDS for Gray Hole Attack Detection in VANETs
by Juan Antonio Arízaga-Silva, Alejandro Medina Santiago, Mario Espinosa-Tlaxcaltecatl and Carlos Muñiz-Montero
World Electr. Veh. J. 2025, 16(9), 526; https://doi.org/10.3390/wevj16090526 - 18 Sep 2025
Viewed by 346
Abstract
Vehicular Ad Hoc Networks (VANETs) enable critical communication for Intelligent Transportation Systems (ITS) but are vulnerable to cybersecurity threats, such as Gray Hole attacks, where malicious nodes selectively drop packets, compromising network integrity. Traditional detection methods struggle with the intermittent nature of these attacks, necessitating advanced solutions. This study proposes a machine learning-based Intrusion Detection System (IDS) to detect Gray Hole attacks in VANETs. Features were extracted from network traffic simulations in NS-3, a discrete-event network simulator widely used in communication protocol research, and categorized into time-, packet-, and protocol-based attributes. Multiple classifiers, including Random Forest, Support Vector Machine (SVM), Logistic Regression, and Naive Bayes, were evaluated using precision, recall, and F1-score metrics. The Random Forest classifier outperformed the others, achieving an F1-score of 0.9927 with 15 estimators and a depth of 15. In contrast, the SVM variants exhibited limitations due to overfitting, with precision and recall below 0.76. Feature analysis highlighted transmission rate and packet/byte counts as the most influential features for detection. The Random Forest-based IDS effectively identifies Gray Hole attacks, offering high accuracy and robustness. This approach addresses a critical gap in VANET security, enhancing resilience against sophisticated threats. Future work could explore hybrid models or real-world deployment to further validate the system’s efficacy. Full article
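A small sketch of the feature-ranking step (column names and the NS-3 export format are assumptions), using Random Forest feature importances with the hyperparameters quoted above:

```python
# Rank which per-node traffic features most influence Gray Hole detection.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

features = ["tx_rate", "rx_packets", "tx_packets", "bytes_forwarded", "drop_ratio"]
df = pd.read_csv("ns3_node_stats.csv")                  # hypothetical NS-3 per-node statistics
clf = RandomForestClassifier(n_estimators=15, max_depth=15)
clf.fit(df[features], df["is_malicious"])
for name, imp in sorted(zip(features, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:16s} {imp:.3f}")
```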

19 pages, 11534 KB  
Article
Segment and Recover: Defending Object Detectors Against Adversarial Patch Attacks
by Haotian Gu and Hamidreza Jafarnejadsani
J. Imaging 2025, 11(9), 316; https://doi.org/10.3390/jimaging11090316 - 15 Sep 2025
Viewed by 399
Abstract
Object detection is used to automatically identify and locate specific objects within images or videos for applications like autonomous driving, security surveillance, and medical imaging. Protecting object detection models against adversarial attacks, particularly malicious patches, is crucial to ensure reliable and safe performance in safety-critical applications, where misdetections can lead to severe consequences. Existing defenses against patch attacks are primarily designed for stationary scenes and struggle against adversarial image patches that vary in scale, position, and orientation in dynamic environments. In this paper, we introduce SAR, a patch-agnostic defense scheme based on image preprocessing that does not require additional model training. By integrating a patch-agnostic detection frontend with a broken-pixel restoration backend, Segment and Recover (SAR) is designed to counter large-mask-covered object-hiding attacks. Our approach is not limited by patch scale, shape, or location: it accurately localizes the adversarial patch on the frontend and restores the occluded pixels on the backend. Our evaluations of clean performance demonstrate that SAR is compatible with a variety of pretrained object detectors. Moreover, SAR exhibits notable resilience improvements over the state-of-the-art methods evaluated in this paper. Our comprehensive evaluation covers diverse patch types, including localized-noise, printable, visible, and adaptive adversarial patches. Full article
(This article belongs to the Special Issue Object Detection in Video Surveillance Systems)
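A crude preprocessing sketch in the segment-then-recover spirit (the high-frequency heuristic and threshold stand in for the paper's segmentation frontend and are assumptions):

```python
# Localize a candidate adversarial patch, mask it, and inpaint before detection.
import cv2
import numpy as np

def segment_and_recover(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # crude localization: adversarial patches tend to contain dense high-frequency texture
    energy = cv2.GaussianBlur(np.abs(cv2.Laplacian(gray, cv2.CV_64F)), (31, 31), 0)
    mask = (energy > np.percentile(energy, 99)).astype(np.uint8)
    # recover the masked pixels so the downstream detector sees a plausible background
    return cv2.inpaint(bgr, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```

The recovered image would then be passed unchanged to a pretrained object detector.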

49 pages, 3209 KB  
Article
SAFE-MED for Privacy-Preserving Federated Learning in IoMT via Adversarial Neural Cryptography
by Mohammad Zubair Khan, Waseem Abbass, Nasim Abbas, Muhammad Awais Javed, Abdulrahman Alahmadi and Uzma Majeed
Mathematics 2025, 13(18), 2954; https://doi.org/10.3390/math13182954 - 12 Sep 2025
Viewed by 780
Abstract
Federated learning (FL) offers a promising paradigm for distributed model training in Internet of Medical Things (IoMT) systems, where patient data privacy and device heterogeneity are critical concerns. However, conventional FL remains vulnerable to gradient leakage, model poisoning, and adversarial inference, particularly in privacy-sensitive and resource-constrained medical environments. To address these challenges, we propose SAFE-MED, a secure and adversarially robust framework for privacy-preserving FL tailored for IoMT deployments. SAFE-MED integrates neural encryption, adversarial co-training, anomaly-aware gradient filtering, and trust-weighted aggregation into a unified learning pipeline. The encryption and decryption components are jointly optimized with a simulated adversary under a minimax objective, ensuring high reconstruction fidelity while suppressing inference risk. To enhance robustness, the system dynamically adjusts client influence based on behavioral trust metrics and detects malicious updates using entropy-based anomaly scores. Comprehensive experiments are conducted on three representative medical datasets: Cleveland Heart Disease (tabular), MIT-BIH Arrhythmia (ECG time series), and PhysioNet Respiratory Signals. SAFE-MED achieves near-baseline accuracy with less than 2% degradation, while reducing gradient leakage by up to 85% compared to vanilla FedAvg and over 66% compared to recent neural cryptographic FL baselines. The framework maintains over 90% model accuracy under 20% poisoning attacks and reduces communication cost by 42% relative to homomorphic encryption-based methods. SAFE-MED demonstrates strong scalability, reliable convergence, and practical runtime efficiency across heterogeneous network conditions. These findings validate its potential as a secure, efficient, and deployable FL solution for next-generation medical AI applications. Full article
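A minimal sketch of the trust-weighted aggregation idea (scores, thresholds, and the flattened-update representation are assumptions):

```python
# Aggregate client updates with trust weights, dropping anomalous clients first.
import numpy as np

def trust_weighted_aggregate(updates, trust, anomaly, anomaly_threshold=0.8):
    """updates: list of flattened model deltas; trust/anomaly: per-client scores in [0, 1]."""
    keep = [i for i, a in enumerate(anomaly) if a < anomaly_threshold]   # filter suspicious clients
    w = np.array([trust[i] for i in keep], dtype=float)
    w /= w.sum()
    return (w[:, None] * np.stack([updates[i] for i in keep])).sum(axis=0)
```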

26 pages, 4880 KB  
Article
Cell-Sequence-Based Covert Signal for Tor De-Anonymization Attacks
by Ran Xin, Yapeng Wang, Xiaohong Huang, Xu Yang and Sio Kei Im
Future Internet 2025, 17(9), 403; https://doi.org/10.3390/fi17090403 - 4 Sep 2025
Viewed by 628
Abstract
This research introduces a novel de-anonymization technique targeting the Tor network, addressing limitations in prior attack models, particularly concerning router positioning following the introduction of bridge relays. Our method exploits two specific, inherent protocol-level vulnerabilities: the absence of a continuity check for circuit-level cells and anomalous residual values in RELAY_EARLY cell counters, working by manipulating cell headers to embed a covert signal. This signal is composed of reserved fields, start and end delimiters, and a payload that encodes target identifiers. Using this signal, malicious routers can effectively mark data flows for later identification. These routers employ a finite state machine (FSM) to adaptively switch between signal injection and detection. Experimental evaluations, conducted within a controlled environment using attacker-controlled onion routers, demonstrated that the embedded signals are undetectable by standard Tor routers, cause no noticeable performance degradation, and allow reliable correlation of Tor users with public services and deanonymization of hidden service IP addresses. This work reveals a fundamental design trade-off in Tor: the decision to conceal circuit length inadvertently exposes cell transmission characteristics. This creates a bidirectional vector for stealthy, protocol-level de-anonymization attacks, even though Tor payloads remain encrypted. Full article
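A toy finite state machine for delimiter-framed signal extraction (the delimiter values and framing are invented for illustration and are not Tor protocol constants):

```python
# Scan a stream of per-cell marker bytes for a start/payload/end covert frame.
START, END = 0xA5, 0x5A              # hypothetical delimiter bytes

def extract_signal(cell_markers):
    state, payload = "IDLE", []
    for b in cell_markers:
        if state == "IDLE" and b == START:
            state, payload = "PAYLOAD", []
        elif state == "PAYLOAD" and b == END:
            return bytes(payload)    # complete covert payload recovered
        elif state == "PAYLOAD":
            payload.append(b)
    return None                      # no complete frame observed
```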

23 pages, 360 KB  
Article
In-Memory Shellcode Runner Detection in Internet of Things (IoT) Networks: A Lightweight Behavioral and Semantic Analysis Framework
by Jean Rosemond Dora, Ladislav Hluchý and Michal Staňo
Sensors 2025, 25(17), 5425; https://doi.org/10.3390/s25175425 - 2 Sep 2025
Viewed by 488
Abstract
The widespread expansion of Internet of Things devices has ushered in an era of unprecedented connectivity. However, it has simultaneously exposed these resource-constrained systems to novel and advanced cyber threats. Among the most sophisticated and complex attacks are those leveraging in-memory shellcode runners (malware), which execute malicious payloads directly in memory, circumventing conventional disk-based detection mechanisms. This paper presents a comprehensive framework, both academic and technical, for detecting in-memory shellcode runners, particularly tailored to the unique characteristics of these networks. We analyze and review the limitations of existing security measures in this area, highlight the challenges posed by device constraints, and propose a multi-layered approach that combines entropy-based anomaly scoring, lightweight behavioral monitoring, and novel Graph Neural Network methods for System Call Semantic Graph Analysis. Our proposal focuses on runtime analysis of process memory, system call patterns (e.g., syscall ID, process ID, hooking, the Win32 application programming interface), and network behavior to identify the subtle indicators of compromise that characterize in-memory attacks, even in the absence of conventional file-system artifacts. Through meticulous empirical evaluation against simulated and real-world Internet of Things attacks (red team engagements, penetration testing), we demonstrate the efficiency of our approach and a few remaining challenges, providing a crucial step towards enhancing the security posture of these critical environments. Full article
(This article belongs to the Special Issue Internet of Things Cybersecurity)
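A minimal sketch of the entropy-based scoring component (the suspicion threshold is an assumption; the paper combines this signal with behavioral and graph-based analysis):

```python
# Score a memory region's byte entropy; very high entropy often indicates packed or injected code.
import math
from collections import Counter

def shannon_entropy(buf: bytes) -> float:
    counts, n = Counter(buf), len(buf)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def score_region(buf: bytes, threshold: float = 7.2) -> dict:
    h = shannon_entropy(buf)          # maximum possible value is 8.0 bits per byte
    return {"entropy": h, "suspicious": h >= threshold}
```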

32 pages, 2361 KB  
Article
Exploring the Use and Misuse of Large Language Models
by Hezekiah Paul D. Valdez, Faranak Abri, Jade Webb and Thomas H. Austin
Information 2025, 16(9), 758; https://doi.org/10.3390/info16090758 - 1 Sep 2025
Viewed by 596
Abstract
Language modeling has evolved from simple rule-based systems into complex assistants capable of tackling a multitude of tasks. State-of-the-art large language models (LLMs) are capable of scoring highly on proficiency benchmarks and, as a result, have been deployed across industries to increase productivity and convenience. However, the prolific nature of such tools has provided threat actors with the ability to leverage them for attack development. Our paper describes the current state of LLMs, their availability, and their role in benevolent and malicious applications. In addition, we propose how an LLM can be combined with text-to-speech (TTS) voice cloning to create a framework capable of carrying out social engineering attacks. Our case study analyzes the realism of two different open-source TTS models, Tortoise TTS and Coqui XTTS-v2, by calculating similarity scores between generated and real audio samples from four participants. Our results demonstrate that Tortoise is able to generate realistic voice-clone audio for native English-speaking males, which indicates that easily accessible resources can be leveraged to create deceptive social engineering attacks. As such tools become more advanced, defenses such as awareness, detection, and red teaming may not be able to keep up with dangerously equipped adversaries. Full article
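As a rough proxy for the similarity scoring described above (not necessarily the paper's metric; file names and the MFCC feature choice are assumptions), one could compare mean MFCC profiles of a real recording and a cloned sample:

```python
# Cosine similarity between mean MFCC profiles of two audio files.
import librosa
import numpy as np

def mfcc_profile(path):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

def similarity(real_path, cloned_path):
    a, b = mfcc_profile(real_path), mfcc_profile(cloned_path)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity("real_sample.wav", "tortoise_clone.wav"))   # hypothetical file names
```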

27 pages, 5936 KB  
Article
Elasticsearch-Based Threat Hunting to Detect Privilege Escalation Using Registry Modification and Process Injection Attacks
by Akashdeep Bhardwaj, Luxmi Sapra and Shawon Rahman
Future Internet 2025, 17(9), 394; https://doi.org/10.3390/fi17090394 - 29 Aug 2025
Viewed by 632
Abstract
Malicious actors often exploit persistence mechanisms, such as unauthorized modifications to Windows startup directories or registry keys, to achieve privilege escalation and maintain access on compromised systems. While information technology (IT) teams legitimately use these AutoStart Extension Points (ASEPs), adversaries frequently deploy malicious binaries with non-standard naming conventions or execute files from transient directories (e.g., Temp or Public folders). This study proposes a threat-hunting framework using a custom Elasticsearch Security Information and Event Management (SIEM) system to detect such persistence tactics. Two hypothesis-driven investigations were conducted: the first focused on identifying unauthorized ASEP registry key modifications during user logon events, while the second targeted malicious Dynamic Link Library (DLL) injections within temporary directories. By correlating Sysmon event logs (e.g., registry key creation/modification and process creation events), the researchers identified attack chains involving sequential registry edits and malicious file executions. Analysis confirmed that Sysmon Event ID 12 (registry object creation) and Event ID 7 (DLL loading) provided critical forensic evidence for detecting these tactics. The findings underscore the efficacy of real-time event correlation in SIEM systems in disrupting adversarial workflows, enabling rapid mitigation through the removal of malicious entries. This approach advances proactive defense strategies against privilege escalation and persistence, emphasizing the need for granular monitoring of registry and filesystem activities in enterprise environments. Full article
(This article belongs to the Special Issue Security of Computer System and Network)
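A hunting-query sketch for the first hypothesis (index pattern and field names are assumptions that depend on how the Sysmon logs are shipped into Elasticsearch):

```python
# Find Sysmon Event ID 12 registry-object creations touching ASEP Run keys,
# as candidates for correlation with Event ID 7 DLL loads from Temp/Public paths.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="winlogbeat-*",
    query={"bool": {"must": [
        {"term": {"winlog.event_id": 12}},
        {"wildcard": {"winlog.event_data.TargetObject": "*\\CurrentVersion\\Run*"}},
    ]}},
    size=100,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["winlog"]["event_data"]["TargetObject"])
```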

19 pages, 3864 KB  
Article
DyP-CNX: A Dynamic Preprocessing-Enhanced Hybrid Model for Network Intrusion Detection
by Mingshan Xia, Li Wang, Yakang Li, Jiahong Xu and Fazhi Qi
Appl. Sci. 2025, 15(17), 9431; https://doi.org/10.3390/app15179431 - 28 Aug 2025
Viewed by 396
Abstract
With the continuous growth of network threats, intrusion detection systems need robustness and adaptability to effectively identify malicious behaviors. However, factors such as noise interference, class imbalance, and complex attack pattern recognition pose significant challenges to traditional systems. To address these issues, this paper proposes the dynamic preprocessing-enhanced DyP-CNX framework. The framework designs a sliding-window dynamic interquartile range (IQR) standardization mechanism to effectively suppress the interference of temporally non-stationary network traffic. It also combines a random undersampling strategy to mitigate the class imbalance problem. The model architecture adopts a CNN-XGBoost collaborative learning framework, combining a dual-channel convolutional neural network (CNN) and two-stage extreme gradient boosting (XGBoost) to integrate the original statistical features and deep semantic features. On the UNSW-NB15 and CSE-CIC-IDS2018 datasets, the method achieved F1-scores of 91.57% and 99.34%, respectively. The experimental results show that the DyP-CNX method has the potential to handle the feature drift and pattern confusion problems in complex network environments, providing a new technical solution for adaptive intrusion detection systems. Full article
(This article belongs to the Special Issue Machine Learning and Its Application for Anomaly Detection)
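A small sketch of sliding-window IQR standardization (window length and minimum-period choices are assumptions):

```python
# Scale each value by the median and IQR of a trailing window to damp non-stationary bursts.
import numpy as np
import pandas as pd

def sliding_iqr_standardize(series: pd.Series, window: int = 500) -> pd.Series:
    q1 = series.rolling(window, min_periods=50).quantile(0.25)
    q3 = series.rolling(window, min_periods=50).quantile(0.75)
    iqr = (q3 - q1).replace(0, np.nan)
    med = series.rolling(window, min_periods=50).median()
    return ((series - med) / iqr).fillna(0.0)
```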

20 pages, 2631 KB  
Article
Machine Learning Models for SQL Injection Detection
by Cosmina-Mihaela Rosca, Adrian Stancu and Catalin Popescu
Electronics 2025, 14(17), 3420; https://doi.org/10.3390/electronics14173420 - 27 Aug 2025
Viewed by 874
Abstract
Cyberattacks include Structured Query Language Injection (SQLi), which threatens web applications that interact with a database. These attacks are carried out by executing injected SQL commands, which compromise the integrity and confidentiality of the data. In this paper, a machine learning (ML)-based model is proposed for identifying SQLi attacks. The authors propose a two-stage personalized software processing pipeline as a novel element. Although the individual techniques are known, their structured combination and application in this context represent a novel approach to transforming raw SQL queries into input features for an ML model. In this research, a dataset consisting of 90,000 SQL queries was constructed, comprising 17,695 legitimate and 72,304 malicious queries. The dataset consists of synthetic data generated using the GPT-4o model and data from a publicly available dataset. These were processed within a pipeline proposed by the authors, consisting of two stages: syntactic normalization and the extraction of eight semantic features for model training. Several ML models were then analyzed using the Azure Machine Learning Studio platform, each paired with different sampling algorithms for selecting the training and validation sets. Out of the 15 training-sampling algorithm combinations, the Voting Ensemble model achieved the best performance: an accuracy of 96.86%, a weighted AUC of 98.25%, a weighted F1-score of 96.77%, a weighted precision of 96.92%, and a Matthews correlation coefficient of 89.89%. These values demonstrate the model’s ability to classify queries as legitimate or malicious. Only 15 malicious queries out of a total of 7,200 were missed, and 211 false alarms were raised. The results confirm the possibility of integrating this algorithm into an additional security layer within an existing web application architecture. In practice, the authors suggest adding an extra layer of security using synthetic data. Full article
(This article belongs to the Special Issue Machine Learning and Cybersecurity—Trends and Future Challenges)
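A sketch of the two-stage preprocessing idea (the normalization rules and the eight features here are illustrative assumptions, not the authors' exact feature set):

```python
# Stage 1: syntactic normalization; stage 2: numeric semantic features for an ML model.
import re

def normalize(q: str) -> str:
    q = q.lower()
    q = re.sub(r"'[^']*'", "'str'", q)        # string literals -> placeholder
    q = re.sub(r"\b\d+\b", "0", q)            # numeric literals -> placeholder
    return re.sub(r"\s+", " ", q).strip()

def features(q: str) -> list[float]:
    n = normalize(q)
    return [
        float(len(n)),
        float(n.count("'")),
        float(n.count("--") + n.count("#")),                          # comment markers
        float(len(re.findall(r"\b(union|or|and)\b", n))),
        float(len(re.findall(r"\b(select|insert|update|delete|drop)\b", n))),
        float(n.count("=")),
        float(n.count(";")),
        1.0 if " or 0=0" in n or " or 'str'='str'" in n else 0.0,     # tautology hint after normalization
    ]
```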

19 pages, 2394 KB  
Article
A Decoupled Contrastive Learning Framework for Backdoor Defense in Federated Learning
by Jiahao Cheng, Tingrui Zhang, Meijiao Li, Wenbin Wang, Jun Wang and Ying Zhang
Symmetry 2025, 17(9), 1398; https://doi.org/10.3390/sym17091398 - 27 Aug 2025
Viewed by 709
Abstract
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy by sharing only local parameters. However, this decentralized setup, while preserving data privacy, also introduces new vulnerabilities, particularly to backdoor attacks, in which compromised clients inject poisoned data or gradients to manipulate the global model. Existing defenses rely on the global server to inspect model parameters, while mitigating backdoor effects locally remains underexplored. To address this, we propose a decoupled contrastive learning–based defense. We first train a backdoor model using poisoned data, then extract intermediate features from both the local and backdoor models, and apply a contrastive objective to reduce their similarity, encouraging the local model to focus on clean patterns and suppress backdoor behaviors. Crucially, we leverage an implicit symmetry between clean and poisoned representations—structurally similar but semantically different. Disrupting this symmetry helps disentangle benign and malicious components. Our approach requires no prior attack knowledge or clean validation data, making it suitable for practical FL deployments. Full article
(This article belongs to the Section Computer)
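A loss-level sketch of the decoupling idea (our reading of the abstract, not the authors' code; the weighting factor is an assumption):

```python
# Penalize similarity between the local model's features and a backdoored reference model's features.
import torch
import torch.nn.functional as F

def decoupling_loss(local_feats, backdoor_feats, task_loss, lam=0.5):
    """local_feats, backdoor_feats: (batch, dim) intermediate activations."""
    sim = F.cosine_similarity(local_feats, backdoor_feats.detach(), dim=1)
    return task_loss + lam * sim.clamp(min=0).mean()   # push representations apart
```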
