Search Results (1,230)

Search Parameters:
Keywords = privacy attack

21 pages, 2222 KB  
Article
Machine Learning-Driven Security and Privacy Analysis of a Dummy-ABAC Model for Cloud Computing
by Baby Marina, Irfana Memon, Fizza Abbas Alvi, Ubaidullah Rajput and Mairaj Nabi
Computers 2025, 14(10), 420; https://doi.org/10.3390/computers14100420 - 2 Oct 2025
Abstract
The Attribute-Based Access Control (ABAC) model makes access control decisions based on subject, object (resource), and contextual attributes. However, the use of sensitive attributes in access control decisions poses many security and privacy challenges, particularly in cloud environments where third parties are involved. To address this shortcoming, we present a novel privacy-preserving Dummy-ABAC model that obfuscates real attributes with dummy attributes before transmission to the cloud server. In the proposed model, only dummy attributes are stored in the cloud database, whereas real attributes and mapping tokens are stored in a local machine database. Only dummy attributes are used for access request evaluation in the cloud, and real data are retrieved in a post-decision mechanism using secure tokens. The security of the proposed model was assessed using simulated threat scenarios, including attribute inference, policy injection, and reverse mapping attacks. Experimental evaluation using machine learning classifiers (Decision Tree and Random Forest) demonstrated that inference accuracy dropped from ~0.65 on real attributes to ~0.25 on dummy attributes, confirming improved resistance to inference attacks. Furthermore, the model rejects malformed and unauthorized policies. Performance analysis of dummy generation, token generation, encoding, and nearest-neighbor search demonstrated minimal latency in both local and cloud environments. Overall, the proposed model ensures efficient, secure, and privacy-preserving access control in cloud environments.
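
The dummy-attribute mechanism described in this abstract can be sketched as follows (an illustrative Python sketch, not the authors' implementation; the attribute names, dummy format, and token scheme are assumptions):

```python
import secrets

# Real attribute values are swapped for random dummies before a request goes
# to the cloud; the dummy->real mapping, keyed by a secure token, stays in a
# local database and is consulted only after the access decision.
local_db = {}  # token -> {attribute_name: (dummy_value, real_value)}

def obfuscate(attributes):
    """Replace real values with dummies; return (token, dummy_attributes)."""
    token = secrets.token_hex(16)
    dummies = {k: "dmy_" + secrets.token_hex(4) for k in attributes}
    local_db[token] = {k: (dummies[k], attributes[k]) for k in attributes}
    return token, dummies

def resolve(token):
    """Post-decision step: recover the real attributes with the local token."""
    return {k: real for k, (_, real) in local_db[token].items()}

token, dummy_attrs = obfuscate({"role": "nurse", "dept": "oncology"})
# the cloud evaluates the request on dummy_attrs only; real values are
# retrieved locally via resolve(token) after the decision
```
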

23 pages, 1735 KB  
Article
FortiNIDS: Defending Smart City IoT Infrastructures Against Transferable Adversarial Poisoning in Machine Learning-Based Intrusion Detection Systems
by Abdulaziz Alajaji
Sensors 2025, 25(19), 6056; https://doi.org/10.3390/s25196056 - 2 Oct 2025
Abstract
In today’s digital era, cyberattacks are rapidly evolving, rendering traditional security mechanisms increasingly inadequate. The adoption of AI-based Network Intrusion Detection Systems (NIDS) has emerged as a promising solution, due to their ability to detect and respond to malicious activity using machine learning techniques. However, these systems remain vulnerable to adversarial threats, particularly data poisoning attacks, in which attackers manipulate training data to degrade model performance. In this work, we examine tree classifiers, Random Forest and Gradient Boosting, to model black-box poisoning attacks. We introduce FortiNIDS, a robust framework that employs a surrogate neural network to generate adversarial perturbations that transfer between models, leveraging the transferability of adversarial examples. In addition, we investigate defense strategies designed to improve the resilience of NIDS in smart city Internet of Things (IoT) settings. Specifically, we evaluate adversarial training and the Reject on Negative Impact (RONI) technique using the widely adopted CICDDoS2019 dataset. Our findings highlight the effectiveness of targeted defenses in improving detection accuracy and maintaining system reliability under adversarial conditions, thereby contributing to the security and privacy of smart city networks.
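
The RONI defense evaluated above admits a compact sketch (illustrative only; a nearest-class-mean classifier stands in for the paper's tree ensembles, and the data points are invented):

```python
# Reject on Negative Impact (RONI): a candidate training point is kept only
# if adding it does not reduce accuracy on a trusted validation set.
def accuracy(train, val):
    groups = {}
    for x, y in train:
        groups.setdefault(y, []).append(x)
    centroid = {y: sum(v) / len(v) for y, v in groups.items()}
    pred = lambda x: min(centroid, key=lambda y: abs(x - centroid[y]))
    return sum(pred(x) == y for x, y in val) / len(val)

def roni_accepts(train, val, candidate, tol=0.0):
    """Accept the candidate only if validation accuracy does not drop."""
    return accuracy(train + [candidate], val) >= accuracy(train, val) - tol

train = [(0.1, 0), (0.2, 0), (9.8, 1), (10.1, 1)]
val   = [(0.0, 0), (0.3, 0), (9.9, 1), (10.2, 1)]
print(roni_accepts(train, val, (0.15, 0)))   # benign point -> True
print(roni_accepts(train, val, (100.0, 0)))  # label-flipped outlier -> False
```
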

18 pages, 1699 KB  
Article
A Comparative Analysis of Defense Mechanisms Against Model Inversion Attacks on Tabular Data
by Neethu Vijayan, Raj Gururajan and Ka Ching Chan
J. Cybersecur. Priv. 2025, 5(4), 80; https://doi.org/10.3390/jcp5040080 - 2 Oct 2025
Abstract
As more machine learning models are used in sensitive fields like healthcare, finance, and smart infrastructure, protecting structured tabular data from privacy attacks is a key research challenge. Although several privacy-preserving methods have been proposed for tabular data, a comprehensive comparison of their performance and trade-offs has yet to be conducted. We introduce and empirically assess a combined defense system that integrates differential privacy, federated learning, adaptive noise injection, hybrid cryptographic encryption, and ensemble-based obfuscation. These strategies are analyzed on benchmark tabular datasets (ADULT, GSS, FTE), showing that the proposed methods can mitigate up to 50% of model inversion attacks relative to baseline models without degrading model utility (F1 scores above 0.85). Moreover, on these datasets, our results match or exceed the latest state-of-the-art (SOTA) in terms of privacy. We also map each defense onto key data privacy laws worldwide (GDPR and HIPAA), suggesting best applicable guidelines for the ethical and regulation-sensitive deployment of privacy-preserving machine learning models in sensitive spaces.
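
Of the combined defenses, noise injection is the easiest to sketch in isolation; the following shows the standard Laplace mechanism from differential privacy (the query, sensitivity, and epsilon values are illustrative assumptions, not values from the paper):

```python
import math
import random

def laplace_sample(scale, rng):
    """Inverse-transform sample from Laplace(0, scale)."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(value, sensitivity, epsilon, rng):
    """Release value plus Laplace(sensitivity / epsilon) noise (eps-DP for this query)."""
    return value + laplace_sample(sensitivity / epsilon, rng)

rng = random.Random(42)
# e.g. a count query over a tabular dataset: sensitivity 1, privacy budget 1.0
noisy_count = privatize(120, sensitivity=1, epsilon=1.0, rng=rng)
```
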
(This article belongs to the Section Privacy)

51 pages, 958 KB  
Systematic Review
AI-Enhanced Intrusion Detection for UAV Systems: A Taxonomy and Comparative Review
by MD Sakibul Islam, Ashraf Sharif Mahmoud and Tarek Rahil Sheltami
Drones 2025, 9(10), 682; https://doi.org/10.3390/drones9100682 - 1 Oct 2025
Abstract
The diverse usage of Unmanned Aerial Vehicles (UAVs) across commercial, military, and civil domains has significantly heightened the need for robust cybersecurity mechanisms. Given their reliance on wireless communications, real-time control systems, and sensor integration, UAVs are highly susceptible to cyber intrusions that can disrupt missions, compromise data integrity, or cause physical harm. This paper presents a comprehensive literature review of Intrusion Detection Systems (IDSs) that leverage artificial intelligence (AI) to enhance the security of UAV and UAV swarm environments. Through rigorous analysis of recent peer-reviewed publications, we examine the studies in terms of AI algorithm, dataset origin, deployment mode (centralized, distributed, or federated), and detection strategy (online versus offline). Results show a dominant preference for centralized, supervised learning using standard datasets such as CICIDS2017, NSL-KDD, and KDDCup99, limiting applicability to real UAV operations. Deep learning (DL) methods, particularly Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Autoencoders (AEs), demonstrate strong detection accuracy, but often under ideal conditions, lacking resilience to zero-day attacks and real-time constraints. Notably, emerging trends point to lightweight IDS models and federated learning frameworks for scalable, privacy-preserving solutions in UAV swarms. This review underscores key research gaps, including the scarcity of real UAV datasets, the absence of standardized benchmarks, and minimal exploration of lightweight detection schemes, offering a foundation for advancing secure UAV systems.
20 pages, 5435 KB  
Article
Do LLMs Offer a Robust Defense Mechanism Against Membership Inference Attacks on Graph Neural Networks?
by Abdellah Jnaini and Mohammed-Amine Koulali
Computers 2025, 14(10), 414; https://doi.org/10.3390/computers14100414 - 1 Oct 2025
Abstract
Graph neural networks (GNNs) are deep learning models that process structured graph data. By leveraging their graph/node classification and link prediction capabilities, they have been effectively applied in multiple domains such as community detection, location-sharing services, and drug discovery. These powerful applications and the vast availability of graphs in diverse fields have facilitated the adoption of GNNs in privacy-sensitive contexts (e.g., banking systems and healthcare). Unfortunately, GNNs are vulnerable to the leakage of sensitive information through well-defined attacks. Our main focus is on membership inference attacks (MIAs), which allow an attacker to infer whether a given sample belongs to the training dataset. To prevent this, we introduce three LLM-guided defense mechanisms applied at the posterior level: posterior encoding with noise, knowledge distillation, and secure aggregation. Our proposed approaches not only successfully reduce MIA accuracy but also maintain the model’s performance on the node classification task. Our findings, validated through extensive experiments on widely used GNN architectures, offer insights into balancing privacy preservation with predictive performance.
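
The first of the three defenses, posterior encoding with noise, can be sketched as a stand-alone step (a minimal illustrative version; the paper's variants are LLM-guided, and the noise scale here is an invented choice):

```python
import random

# Perturb and renormalize class posteriors so the predicted label survives
# but the fine-grained confidence signal an MIA exploits is blunted.
def noisy_posteriors(posteriors, scale, rng):
    perturbed = [max(p + rng.uniform(-scale, scale), 1e-6) for p in posteriors]
    total = sum(perturbed)
    return [p / total for p in perturbed]

rng = random.Random(7)
p = [0.85, 0.10, 0.05]               # a confident posterior leaks membership signal
q = noisy_posteriors(p, 0.05, rng)   # scale small enough to keep the argmax
assert q.index(max(q)) == p.index(max(p))  # node classification unchanged
```
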

39 pages, 5203 KB  
Technical Note
EMR-Chain: Decentralized Electronic Medical Record Exchange System
by Ching-Hsi Tseng, Yu-Heng Hsieh, Heng-Yi Lin and Shyan-Ming Yuan
Technologies 2025, 13(10), 446; https://doi.org/10.3390/technologies13100446 - 1 Oct 2025
Abstract
Current systems for exchanging medical records struggle with efficiency and privacy issues. While the establishment of the Electronic Medical Record Exchange Center (EEC) in 2012 was intended to alleviate these issues, its centralized structure has introduced new attack vectors, such as performance bottlenecks and single points of failure, and gives patients no consent control over their data. Methods: This paper describes a novel EMR Gateway system that uses blockchain technology to exchange electronic medical records, overcoming the limitations of current centralized EMR-sharing systems and leveraging decentralization to enhance resilience, data privacy, and patient autonomy. Our proposed system is built on two interconnected blockchains: a Decentralized Identity Blockchain (DID-Chain) based on Ethereum for managing user identities via smart contracts, and an Electronic Medical Record Blockchain (EMR-Chain) implemented on Hyperledger Fabric to handle medical record indexes and fine-grained access control. To address the dual requirements of cross-platform data exchange and patient privacy, the system was developed on the Fast Healthcare Interoperability Resources (FHIR) standard, which serves as a common language between heterogeneous healthcare systems, and incorporates stringent de-identification protocols that remove all personal details from exchanged data. In performance tests, the system ingested about 40 transactions per second and retrieved data at around 49 per second, well above the average exchange volume handled by Taiwanese hospitals in 2018, indicating ample headroom for larger future workloads.

18 pages, 654 KB  
Article
Trustworthy Face Recognition as a Service: A Multi-Layered Approach for Mitigating Spoofing and Ensuring System Integrity
by Mostafa Kira, Zeyad Alajamy, Ahmed Soliman, Yusuf Mesbah and Manuel Mazzara
Future Internet 2025, 17(10), 450; https://doi.org/10.3390/fi17100450 - 30 Sep 2025
Abstract
Facial recognition systems are increasingly used for authentication across domains such as finance, e-commerce, and public services, but their growing adoption raises significant concerns about spoofing attacks enabled by printed photos, replayed videos, or AI-generated deepfakes. To address this gap, we introduce a multi-layered Face Recognition-as-a-Service (FRaaS) platform that integrates passive liveness detection with active challenge–response mechanisms, thereby defending against both low-effort and sophisticated presentation attacks. The platform is designed as a scalable cloud-based solution, complemented by an open-source SDK for seamless third-party integration, and guided by ethical AI principles of fairness, transparency, and privacy. A comprehensive evaluation validates the system’s logic and implementation: (i) frontend audits using Lighthouse consistently scored above 96% in performance, accessibility, and best practices; (ii) SDK testing achieved over 91% code coverage with reliable OAuth flow and error resilience; (iii) the passive liveness layer employed the DeepPixBiS model, which achieves an Average Classification Error Rate (ACER) of 0.4 on the OULU–NPU benchmark, outperforming prior state-of-the-art methods; and (iv) load simulations confirmed high throughput (276 req/s), low latency (95th percentile at 1.51 ms), and zero error rates. Together, these results demonstrate that the proposed platform is robust, scalable, and trustworthy for security-critical applications.
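
The ACER figure quoted in (iii) is a standard presentation-attack-detection metric: the mean of the attack (APCER) and bona fide (BPCER) error rates. A minimal sketch, with illustrative rates rather than the paper's actual component figures:

```python
# ACER = (APCER + BPCER) / 2: the average of the rate at which presentation
# attacks are wrongly accepted (APCER) and the rate at which genuine users
# are wrongly rejected (BPCER).
def acer(apcer: float, bpcer: float) -> float:
    return (apcer + bpcer) / 2

# e.g. APCER 0.5 and BPCER 0.3 (in %) would yield the reported ACER of 0.4
print(acer(0.5, 0.3))  # 0.4
```
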

22 pages, 3582 KB  
Article
Novel Synthetic Dataset Generation Method with Privacy-Preserving for Intrusion Detection System
by JaeCheol Kim, Seungun Park, Jaesik Cha, Eunyeong Son and Yunsik Son
Appl. Sci. 2025, 15(19), 10609; https://doi.org/10.3390/app151910609 - 30 Sep 2025
Abstract
The expansion of Internet of Things (IoT) networks has enabled real-time data collection and automation across smart cities, healthcare, and agriculture, delivering greater convenience and efficiency; however, exposure to diverse threats has also increased. Machine learning-based Intrusion Detection Systems (IDSs) provide an effective means of defense, yet they require large volumes of data, and the use of raw IoT network data containing sensitive information introduces new privacy risks. This study proposes a novel privacy-preserving synthetic data generation model based on a tabular diffusion framework that incorporates Differential Privacy (DP). Among the three diffusion models evaluated (TabDDPM, TabSyn, and TabDiff), TabDiff with Utility-Preserving DP (UP-DP) achieved the best Synthetic Data Vault (SDV) Fidelity (0.98) and higher values on multiple statistical metrics, indicating improved utility. Furthermore, by employing the DisclosureProtection metric and attribute inference attacks to infer and compare sensitive attributes on both real and synthetic datasets, we show that the proposed approach reduces the privacy risk of the synthetic data. Additionally, a Membership Inference Attack (MIA) was run against models trained with both real and synthetic data. This approach decreases the risk of leaking patterns related to sensitive information, thereby enabling secure dataset sharing and analysis.

36 pages, 2113 KB  
Article
Self-Sovereign Identities and Content Provenance: VeriTrust—A Blockchain-Based Framework for Fake News Detection
by Maruf Farhan, Usman Butt, Rejwan Bin Sulaiman and Mansour Alraja
Future Internet 2025, 17(10), 448; https://doi.org/10.3390/fi17100448 - 30 Sep 2025
Abstract
The widespread circulation of digital misinformation exposes a critical shortcoming in prevailing detection strategies, namely, the absence of robust mechanisms to confirm the origin and authenticity of online content. This study addresses this by introducing VeriTrust, a conceptual and provenance-centric framework designed to establish content-level trust by integrating Self-Sovereign Identity (SSI), blockchain-based anchoring, and AI-assisted decentralized verification. The proposed system is designed to operate through three key components: (1) issuing Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) through Hyperledger Aries and Indy; (2) anchoring cryptographic hashes of content metadata to an Ethereum-compatible blockchain using Merkle trees and smart contracts; and (3) enabling a community-led verification model enhanced by federated learning with future extensibility toward zero-knowledge proof techniques. Theoretical projections, derived from established performance benchmarks, suggest the framework offers low latency and high scalability for content anchoring and minimal on-chain transaction fees. It also prioritizes user privacy by ensuring no on-chain exposure of personal data. VeriTrust redefines misinformation mitigation by shifting from reactive content-based classification to proactive provenance-based verification, forming a verifiable link between digital content and its creator. VeriTrust, while currently at the conceptual and theoretical validation stage, holds promise for enhancing transparency, accountability, and resilience against misinformation attacks across journalism, academia, and online platforms.
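
The Merkle-tree anchoring step in component (2) can be sketched as follows (illustrative only; the hash function and leaf encoding are assumptions, as VeriTrust's exact scheme is not specified here). Only the root goes on chain, so any single content hash can later be proven against it:

```python
import hashlib

def merkle_root(leaves):
    """Pairwise-hash leaves up to a single root; returns its hex digest."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

# anchor one root on chain instead of three separate metadata hashes
root = merkle_root([b"article-hash-1", b"article-hash-2", b"article-hash-3"])
```
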
(This article belongs to the Special Issue AI and Blockchain: Synergies, Challenges, and Innovations)

32 pages, 13081 KB  
Article
FedIFD: Identifying False Data Injection Attacks in Internet of Vehicles Based on Federated Learning
by Huan Wang, Junying Yang, Jing Sun, Zhe Wang, Qingzheng Liu and Shaoxuan Luo
Big Data Cogn. Comput. 2025, 9(10), 246; https://doi.org/10.3390/bdcc9100246 - 26 Sep 2025
Abstract
With the rapid development of intelligent connected vehicle technology, false data injection (FDI) attacks have become a major challenge in the Internet of Vehicles (IoV). While deep learning methods can effectively identify such attacks, the dynamic, distributed architecture of the IoV and limited computing resources hinder both privacy protection and lightweight computation. To address this, we propose FedIFD, a federated learning (FL)-based detection method for false data injection attacks. The lightweight threat detection model utilizes basic safety messages (BSM) for local incremental training, and the Q-FedCG algorithm compresses gradients for global aggregation. Original features are reshaped using a time window. To ensure temporal and spatial consistency, a sliding average strategy aligns samples before spatial feature extraction. A dual-branch architecture enables parallel extraction of spatiotemporal features: a three-layer stacked Bidirectional Long Short-Term Memory (BiLSTM) captures temporal dependencies, and a lightweight Transformer models spatial relationships. A dynamic feature fusion weight matrix calculates attention scores for adaptive feature weighting. Finally, a differentiated pooling strategy is applied to emphasize critical features. Experiments on the VeReMi dataset show that the accuracy reaches 97.8%.
(This article belongs to the Special Issue Big Data Analytics with Machine Learning for Cyber Security)

20 pages, 2856 KB  
Article
Privacy-Preserving Federated Review Analytics with Data Quality Optimization for Heterogeneous IoT Platforms
by Jiantao Xu, Liu Jin and Chunhua Su
Electronics 2025, 14(19), 3816; https://doi.org/10.3390/electronics14193816 - 26 Sep 2025
Abstract
The proliferation of Internet of Things (IoT) devices has created a distributed ecosystem where users generate vast amounts of review data across heterogeneous platforms, from smart home assistants to connected vehicles. This data is crucial for service improvement but is plagued by fake reviews, data quality inconsistencies, and significant privacy risks. Traditional centralized analytics fail in this landscape due to data privacy regulations and the sheer scale of distributed data. To address this, we propose FedDQ, a federated learning framework for Privacy-Preserving Federated Review Analytics with Data Quality Optimization. FedDQ introduces a multi-faceted data quality assessment module that operates locally on each IoT device, evaluating review data based on textual coherence, behavioral patterns, and cross-modal consistency without exposing raw data. These quality scores are then used to orchestrate a quality-aware aggregation mechanism at the server, prioritizing contributions from high-quality, reliable clients. Furthermore, our framework incorporates differential privacy and models system heterogeneity to ensure robustness and practical applicability in resource-constrained IoT environments. Extensive experiments on multiple real-world datasets show that FedDQ significantly outperforms baseline federated learning methods in accuracy, convergence speed, and resilience to data poisoning attacks, achieving up to a 13.8% improvement in F1-score under highly heterogeneous and noisy conditions while preserving user privacy.
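
The quality-aware aggregation idea can be sketched as a weighted variant of federated averaging (illustrative only; the quality scores and flat parameter vectors below are invented, not FedDQ's actual update format):

```python
# Weight each client's model update by its local data-quality score rather
# than by sample count alone, shrinking the influence of unreliable clients.
def quality_weighted_average(updates, quality_scores):
    total = sum(quality_scores)
    weights = [q / total for q in quality_scores]
    return [sum(w * u[i] for w, u in zip(weights, updates))
            for i in range(len(updates[0]))]

updates = [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]  # third client is noisy/poisoned
scores  = [0.9, 0.8, 0.01]                          # and received a low quality score
global_update = quality_weighted_average(updates, scores)
# the outlier client's contribution is almost entirely suppressed
```
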
(This article belongs to the Special Issue Emerging IoT Sensor Network Technologies and Applications)

77 pages, 8596 KB  
Review
Smart Grid Systems: Addressing Privacy Threats, Security Vulnerabilities, and Demand–Supply Balance (A Review)
by Iqra Nazir, Nermish Mushtaq and Waqas Amin
Energies 2025, 18(19), 5076; https://doi.org/10.3390/en18195076 - 24 Sep 2025
Abstract
The smart grid (SG) plays a seminal role in the modern energy landscape by integrating digital technologies, the Internet of Things (IoT), and Advanced Metering Infrastructure (AMI) to enable bidirectional energy flow, real-time monitoring, and enhanced operational efficiency. However, these advancements also introduce critical challenges related to data privacy, cybersecurity, and operational balance. This review critically evaluates SG systems, beginning with an analysis of data privacy vulnerabilities, including Man-in-the-Middle (MITM), Denial-of-Service (DoS), and replay attacks, as well as insider threats, exemplified by incidents such as the 2023 Hydro-Québec cyberattack and the 2024 blackout in Spain. The review further details the SG architecture and its key components, including smart meters (SMs), control centers (CCs), aggregators, smart appliances, and renewable energy sources (RESs), while emphasizing essential security requirements such as confidentiality, integrity, availability, secure storage, and scalability. Various privacy preservation techniques are discussed, including cryptographic tools like Homomorphic Encryption, Zero-Knowledge Proofs, and Secure Multiparty Computation, anonymization and aggregation methods such as differential privacy and k-Anonymity, as well as blockchain-based approaches and machine learning solutions. Additionally, the review examines pricing models and their resolution strategies, Demand–Supply Balance Programs (DSBPs) utilizing optimization, game-theoretic, and AI-based approaches, and energy storage systems (ESSs) encompassing lead–acid, lithium-ion, sodium-sulfur, and sodium-ion batteries, highlighting their respective advantages and limitations. By synthesizing these findings, the review identifies existing research gaps and provides guidance for future studies aimed at advancing secure, efficient, and sustainable smart grid implementations.
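
Among the reviewed cryptographic tools, Secure Multiparty Computation for meter aggregation rests on additive secret sharing, which is small enough to sketch (illustrative only; the field size, party count, and meter readings are assumptions):

```python
import random

P = 2**61 - 1  # a prime modulus; shares are uniform in [0, P)

def share(value, n, rng):
    """Split value into n additive shares that sum to value mod P."""
    parts = [rng.randrange(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

rng = random.Random(0)
readings = [530, 712, 405]  # Wh readings from three smart meters
all_shares = [share(r, 3, rng) for r in readings]
# each aggregator node sums one share per meter; combining the partial sums
# reveals only the neighborhood total, never an individual reading
partials = [sum(col) % P for col in zip(*all_shares)]
total = sum(partials) % P
print(total)  # 1647
```
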
(This article belongs to the Special Issue Smart Grid and Energy Storage)

25 pages, 471 KB  
Article
Mitigating Membership Inference Attacks via Generative Denoising Mechanisms
by Zhijie Yang, Xiaolong Yan, Guoguang Chen and Xiaoli Tian
Mathematics 2025, 13(19), 3070; https://doi.org/10.3390/math13193070 - 24 Sep 2025
Abstract
Membership Inference Attacks (MIAs) pose a significant threat to privacy in modern machine learning systems, enabling adversaries to determine whether a specific data record was used during model training. Existing defense techniques often degrade model utility or rely on heuristic noise injection, which fails to provide a robust, mathematically grounded defense. In this paper, we propose Diffusion-Driven Data Preprocessing (D3P), a novel privacy-preserving framework leveraging generative diffusion models to transform sensitive training data before learning, thereby reducing the susceptibility of trained models to MIAs. Our method integrates a mathematically rigorous denoising process into a privacy-oriented diffusion pipeline, which ensures that the reconstructed data maintains essential semantic features for model utility while obfuscating fine-grained patterns that MIAs exploit. We further introduce a privacy–utility optimization strategy grounded in formal probabilistic analysis, enabling adaptive control of the diffusion noise schedule to balance attack resilience and predictive performance. Experimental evaluations across multiple datasets and architectures demonstrate that D3P significantly reduces MIA success rates by up to 42.3% compared to state-of-the-art defenses, with a less than 2.5% loss in accuracy. This work provides a theoretically principled and empirically validated pathway for integrating diffusion-based generative mechanisms into privacy-preserving AI pipelines, which is particularly suitable for deployment in cloud-based and blockchain-enabled machine learning environments.
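
The attack class D3P defends against can be illustrated by the classic loss-threshold membership inference test (a minimal sketch with invented losses; a successful defense pushes this attack's accuracy back toward chance, 0.5):

```python
# Guess "member" when the model's loss on a record is below a threshold:
# overfit models incur much lower loss on training data than on unseen data.
def mia_guess(loss, threshold=0.5):
    return loss < threshold

member_losses     = [0.02, 0.05, 0.01]   # typical of memorized training points
non_member_losses = [0.90, 1.40, 0.75]   # unseen records incur higher loss
truth   = [True] * 3 + [False] * 3
guesses = [mia_guess(l) for l in member_losses + non_member_losses]
attack_accuracy = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
print(attack_accuracy)  # 1.0 against this undefended toy model
```
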

22 pages, 858 KB  
Systematic Review
Network Data Flow Collection Methods for Cybersecurity: A Systematic Literature Review
by Alessandro Carvalho Coutinho and Luciano Vieira de Araújo
Computers 2025, 14(10), 407; https://doi.org/10.3390/computers14100407 - 24 Sep 2025
Abstract
Network flow collection has become a cornerstone of cyber defence, yet the literature still lacks a consolidated view of which technologies are effective across different environments and conditions. We conducted a systematic review of 362 publications indexed in six digital libraries between January 2019 and July 2025, of which 51 met PRISMA 2020 eligibility criteria. All extraction materials are archived on OSF. NetFlow derivatives appear in 62.7% of the studies, IPFIX in 45.1%, INT/P4 or OpenFlow mirroring in 17.6%, and sFlow in 9.8%, with totals exceeding 100% because several papers evaluate multiple protocols. In total, 17 of the 51 studies (33.3%) tested production links of at least 40 Gbps, while others remained in laboratory settings. Fewer than half reported packet-loss thresholds or privacy controls, and none adopted a shared benchmark suite. These findings highlight trade-offs between throughput, fidelity, computational cost, and privacy, as well as gaps in encrypted-traffic support and GDPR-compliant anonymisation. Most importantly, our synthesis demonstrates that flow-collection methods directly shape what can be detected: some exporters are effective for volumetric attacks such as DDoS, while others enable visibility into brute-force authentication, botnets, or IoT malware. In other words, the choice of telemetry technology determines which threats and anomalous behaviours remain visible or hidden to defenders. By mapping technologies, metrics, and gaps, this review provides a single reference point for researchers, engineers, and regulators facing the challenges of flow-aware cybersecurity.
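
What a flow exporter actually emits, and hence what a detector can see, is roughly a record of counters keyed by the 5-tuple, as in this NetFlow/IPFIX-style sketch (the packet data and field set are illustrative, not any specific exporter's schema):

```python
from collections import defaultdict

# A flow record summarizes all packets sharing one 5-tuple; per-packet
# payloads are discarded, which is why exporter choice bounds detection.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def observe(src, dst, sport, dport, proto, size):
    key = (src, dst, sport, dport, proto)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += size

observe("10.0.0.5", "10.0.0.9", 51514, 443, "tcp", 1500)
observe("10.0.0.5", "10.0.0.9", 51514, 443, "tcp", 40)
# both packets collapse into one record: {"packets": 2, "bytes": 1540}
```
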
(This article belongs to the Section ICT Infrastructures for Cybersecurity)

34 pages, 7182 KB  
Article
AI-Driven Attack Detection and Cryptographic Privacy Protection for Cyber-Resilient Industrial Control Systems
by Archana Pallakonda, Kabilan Kaliyannan, Rahul Loganathan Sumathi, Rayappa David Amar Raj, Rama Muni Reddy Yanamala, Christian Napoli and Cristian Randieri
IoT 2025, 6(3), 56; https://doi.org/10.3390/iot6030056 - 22 Sep 2025
Abstract
Industrial control systems (ICS) are increasingly vulnerable to evolving cyber threats due to the convergence of operational and information technologies. This research presents a robust cybersecurity framework that integrates machine learning-based anomaly detection with advanced cryptographic techniques to protect ICS communication networks. Using the ICS-Flow dataset, we evaluate several ensemble models, with XGBoost achieving 99.92% accuracy in binary classification and Decision Tree attaining 99.81% accuracy in multi-class classification. Additionally, we implement an LSTM autoencoder for temporal anomaly detection and employ the ADWIN technique for real-time drift detection. To ensure data security, we apply AES-CBC with HMAC and AES-GCM with RSA encryption, which demonstrates resilience against brute-force, tampering, and cryptanalytic attacks. Security assessments, including entropy analysis and adversarial evaluations (IND-CPA and IND-CCA), confirm the robustness of the encryption schemes against passive and active threats. A hardware implementation on a PYNQ Zynq board shows the feasibility of real-time deployment, with a runtime of 0.11 s. The results demonstrate that the proposed framework enhances ICS security by combining AI-driven anomaly detection with RSA-based cryptography, offering a viable solution for protecting ICS networks from emerging cyber threats.
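
The entropy analysis mentioned in the security assessment typically means Shannon entropy in bits per byte, where strong ciphertext approaches the maximum of 8. A minimal sketch (the sample inputs are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: sum of p * log2(1/p) over byte values."""
    n = len(data)
    return sum(c / n * math.log2(n / c) for c in Counter(data).values())

print(shannon_entropy(b"aaaaaaaa"))        # 0.0 (a single repeated byte)
print(shannon_entropy(bytes(range(256))))  # 8.0 (all byte values equally likely)
```

Highly structured plaintext (or a weak cipher leaking structure) scores well below 8, which is what such an assessment checks for.
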
