Search Results (165)

Search Parameters:
Keywords = AI/ML security

37 pages, 3784 KB  
Review
A Review on the Detection of Plant Disease Using Machine Learning and Deep Learning Approaches
by Thandiwe Nyawose, Rito Clifford Maswanganyi and Philani Khumalo
J. Imaging 2025, 11(10), 326; https://doi.org/10.3390/jimaging11100326 - 23 Sep 2025
Viewed by 492
Abstract
The early and accurate detection of plant diseases is essential for ensuring food security, enhancing crop yields, and facilitating precision agriculture. Manual methods are labour-intensive and prone to error, especially under varying environmental conditions. Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has advanced automated disease identification through image classification. However, challenges persist, including limited generalisability, small and imbalanced datasets, and poor real-world performance. Unlike previous reviews, this paper critically evaluates model performance in both lab and real-time field conditions, emphasising robustness, generalisation, and suitability for edge deployment. It introduces recent architectures such as GreenViT, hybrid ViT–CNN models, and YOLO-based single- and two-stage detectors, comparing their accuracy, inference speed, and hardware efficiency. The review discusses multimodal and self-supervised learning techniques to enhance detection in complex environments, highlighting key limitations, including reliance on handcrafted features, overfitting, and sensitivity to environmental noise. Strengths and weaknesses of models across diverse datasets are analysed with a focus on real-time agricultural applicability. The paper concludes by identifying research gaps and outlining future directions, including the development of lightweight architectures, integration with Deep Convolutional Generative Adversarial Networks (DCGANs), and improved dataset diversity for real-world deployment in precision agriculture. Full article
(This article belongs to the Section Image and Video Processing)

26 pages, 2055 KB  
Review
Evapotranspiration Estimation in the Arab Region: Methodological Advances and Multi-Sensor Integration Framework
by Shamseddin M. Ahmed, Khalid G. Biro Turk, Adam E. Ahmed, Azharia A. Elbushra, Anwar A. Aldhafeeri and Hossam M. Darrag
Water 2025, 17(18), 2702; https://doi.org/10.3390/w17182702 - 12 Sep 2025
Viewed by 434
Abstract
Evapotranspiration (ET) estimation is crucial for sustainable water resource management in arid and semi-arid regions, particularly in the Arab world, where water scarcity remains a significant challenge. The objectives of this study were to map dominant ET estimation techniques and their geographic distribution, demonstrate fusion-based ET estimation under data-scarce conditions, and examine their alignment with climate change and food security priorities. The study reviewed 1279 ET-related articles indexed in the Web of Science, highlighting methodological trends, regional disparities, and the emergence of data-driven techniques. The results showed that traditional methods—primarily the Penman-Monteith model—dominate nearly 70% of the literature. In contrast, machine learning (ML), remote sensing (RS), and artificial intelligence (AI) collectively account for approximately 30%, with hybrid fusion frameworks appearing in only 2% of studies. ML applications are concentrated in Morocco, Egypt, and Iraq, while 50% of Arab countries lack any ML- or AI-based ET research. Complementing the bibliometric analysis, this study demonstrates the practical potential of ML-based ET fusion using Landsat and FAO Water Productivity (WaPOR) data within Saudi Arabia. A random forest model outperformed traditional averaging, reducing the mean absolute error (MAE) to 215.08 mm/year and the root mean square error (RMSE) to 531.34 mm/year, with a Pearson correlation coefficient of 0.86. The findings advocate for greater support and regional collaboration to advance ET monitoring and integrate ML-based modelling into climate resilience frameworks. Full article
(This article belongs to the Special Issue Applied Remote Sensing in Irrigated Agriculture)
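The fusion idea this abstract describes (a random forest combining two noisy ET estimates and beating simple averaging) can be sketched as follows. This is a minimal illustration on synthetic data; the bias and noise values are invented assumptions, not the paper's actual Landsat/WaPOR pipeline.

```python
# Hypothetical sketch: fuse two satellite-derived ET estimates with a random
# forest, in the spirit of the Landsat/WaPOR fusion the review describes.
# All numbers below (biases, noise levels) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
et_true = rng.uniform(800, 2000, n)               # reference ET, mm/year
et_landsat = et_true + rng.normal(0, 250, n)      # unbiased but noisy source
et_wapor = et_true + 400 + rng.normal(0, 200, n)  # systematically biased source

X = np.column_stack([et_landsat, et_wapor])
train, test = slice(0, 400), slice(400, None)

# Baseline: simple averaging cannot correct the systematic bias of one source
mae_avg = mean_absolute_error(et_true[test], X[test].mean(axis=1))

# Fusion: the forest learns each source's bias/noise structure from references
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[train], et_true[train])
mae_rf = mean_absolute_error(et_true[test], rf.predict(X[test]))

print(f"MAE averaging: {mae_avg:.1f} mm/yr, MAE random forest: {mae_rf:.1f} mm/yr")
```

On this synthetic setup the learned fusion outperforms averaging because averaging propagates half of the systematic bias; the paper reports the analogous MAE/RMSE gains on real data.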

42 pages, 5040 KB  
Systematic Review
A Systematic Review of Machine Learning Analytic Methods for Aviation Accident Research
by Aziida Nanyonga, Ugur Turhan and Graham Wild
Sci 2025, 7(3), 124; https://doi.org/10.3390/sci7030124 - 4 Sep 2025
Viewed by 676
Abstract
The aviation industry prioritizes safety and has embraced innovative approaches for both reactive and proactive safety measures. Machine learning (ML) has emerged as a useful tool for aviation safety. This systematic literature review explores ML applications for safety within the aviation industry over the past 25 years. Through a comprehensive search on Scopus and backward reference searches via Google Scholar, 87 of the most relevant papers were identified. The investigation focused on the application context, ML techniques employed, data sources, and the implications of contextual nuances for safety analysis outcomes. ML techniques have been effective for post-accident analysis, predictive modelling, and real-time incident detection across diverse aviation scenarios. Supervised, unsupervised, and semi-supervised learning methods, including neural networks, decision trees, support vector machines, and deep learning models, have all been applied for analyzing accidents, identifying patterns, and forecasting potential incidents. Notably, data sources such as the Aviation Safety Reporting System (ASRS) and the National Transportation Safety Board (NTSB) datasets were the most used. Transparency, fairness, and bias mitigation emerge as critical factors that shape the credibility and acceptance of ML-based safety research in aviation. The review revealed seven recommended future research directions: (1) interpretable AI; (2) real-time prediction; (3) hybrid models; (4) handling of unbalanced datasets; (5) privacy and data security; (6) human–machine interfaces for safety professionals; (7) regulatory implications. These directions provide a blueprint for further ML-based aviation safety research. This review underscores the role of ML applications in shaping aviation safety practices, thereby enhancing safety for all stakeholders. It serves as a constructive and cautionary guide for researchers, practitioners, and decision-makers, emphasizing the value of ML when used appropriately to make aviation safety more data-driven and proactive. Full article

21 pages, 2213 KB  
Review
AI in Dentistry: Innovations, Ethical Considerations, and Integration Barriers
by Tao-Yuan Liu, Kun-Hua Lee, Arvind Mukundan, Riya Karmakar, Hardik Dhiman and Hsiang-Chen Wang
Bioengineering 2025, 12(9), 928; https://doi.org/10.3390/bioengineering12090928 - 29 Aug 2025
Viewed by 1150
Abstract
Background/Objectives: Artificial Intelligence (AI) is improving dentistry through increased accuracy in diagnostics, planning, and workflow automation. AI tools, including machine learning (ML) and deep learning (DL), are being adopted in oral medicine to improve patient care, increase efficiency, and lessen clinicians’ workloads. Despite this, AI in dentistry faces acceptance challenges, including ethical, legal, and technological obstacles. This article reviews current AI use in oral medicine, new technology development, and integration barriers. Methods: A narrative review of peer-reviewed articles in databases such as PubMed, Scopus, Web of Science, and Google Scholar was conducted. Peer-reviewed articles from the last decade covering AI applications in diagnostic imaging, predictive analysis, real-time documentation, and workflow automation were examined. In addition, the review addressed improvements in AI models and critical impediments such as ethical concerns and integration barriers. Results: AI has exhibited strong performance in radiographic diagnostics, with high accuracy in reading cone-beam computed tomography (CBCT) scans, intraoral photographs, and radiographs. AI-facilitated predictive analysis has enhanced personalized care planning and disease prevention, and AI-facilitated automation has streamlined administrative workflows and patient record management. U-Net-based segmentation models exhibit sensitivities and specificities of approximately 93.0% and 88.0%, respectively, in identifying periapical lesions on 2D CBCT slices. TensorFlow-based workflow modules, integrated into vendor platforms such as Planmeca Romexis, can reduce the processing time of patient records by a minimum of 30 percent in standard practice. Privacy-preserving federated learning architectures have attained cross-site model consistency exceeding 90% accuracy, enabling collaborative training among diverse dentistry clinics. Explainable AI (XAI) and federated learning have enhanced AI transparency and security, but barriers remain, including concerns over data privacy, AI bias, gaps in AI regulation, and clinician training. Conclusions: AI is revolutionizing dentistry with enhanced diagnostic accuracy, predictive planning, and efficient administrative automation. As AI systems grow more capable, ethics and legislation must keep pace to enable responsible AI integration. To realize AI’s full potential in dental care, future research should prioritize AI interpretability, uniform protocols, and collaboration between specialties. Full article

40 pages, 1946 KB  
Review
Climate-Resilient Crops: Integrating AI, Multi-Omics, and Advanced Phenotyping to Address Global Agricultural and Societal Challenges
by Doni Thingujam, Sandeep Gouli, Sachin Promodh Cooray, Katie Busch Chandran, Seth Bradley Givens, Renganathan Vellaichamy Gandhimeyyan, Zhengzhi Tan, Yiqing Wang, Keerthi Patam, Sydney A. Greer, Ranju Acharya, David Octor Moseley, Nesma Osman, Xin Zhang, Megan E. Brooker, Mary Love Tagert, Mark J. Schafer, Changyoon Jeong, Kevin Flynn Hoffseth, Raju Bheemanahalli, J. Michael Wyss, Nuwan Kumara Wijewardane, Jong Hyun Ham and M. Shahid Mukhtar
Plants 2025, 14(17), 2699; https://doi.org/10.3390/plants14172699 - 29 Aug 2025
Viewed by 1358
Abstract
Drought and excess ambient temperature intensify abiotic and biotic stresses on agriculture, threatening food security and economic stability. The development of climate-resilient crops is crucial for sustainable, efficient farming. This review highlights the role of multi-omics encompassing genomics, transcriptomics, proteomics, metabolomics, and epigenomics in identifying genetic pathways for stress resilience. Advanced phenomics, using drones and hyperspectral imaging, can accelerate breeding programs by enabling high-throughput trait monitoring. Artificial intelligence (AI) and machine learning (ML) enhance these efforts by analyzing large-scale omics and phenotypic data, predicting stress tolerance traits, and optimizing breeding strategies. Additionally, plant-associated microbiomes contribute to stress tolerance and soil health through bioinoculants and synthetic microbial communities. Beyond agriculture, these advancements have broad societal, economic, and educational impacts. Climate-resilient crops can enhance food security, reduce hunger, and support vulnerable regions. AI-driven tools and precision agriculture empower farmers, improving livelihoods and equitable technology access. Educating teachers, students, and future generations fosters awareness and equips them to address climate challenges. Economically, these innovations reduce financial risks, stabilize markets, and promote long-term agricultural sustainability. These cutting-edge approaches can transform agriculture by integrating AI, multi-omics, and advanced phenotyping, ensuring a resilient and sustainable global food system amid climate change. Full article
(This article belongs to the Section Crop Physiology and Crop Production)

30 pages, 815 KB  
Review
Next-Generation Machine Learning in Healthcare Fraud Detection: Current Trends, Challenges, and Future Research Directions
by Kamran Razzaq and Mahmood Shah
Information 2025, 16(9), 730; https://doi.org/10.3390/info16090730 - 25 Aug 2025
Viewed by 1887
Abstract
The growing complexity and size of healthcare systems have rendered fraud detection increasingly challenging; however, the current literature lacks a holistic view of the latest machine learning (ML) techniques with practical implementation concerns. The present study addresses this gap by highlighting the importance of machine learning (ML) in preventing and mitigating healthcare fraud, evaluating recent advancements, investigating implementation barriers, and exploring future research dimensions. To further address the limited research on the evaluation of machine learning (ML) and hybrid approaches, this study considers a broad spectrum of ML techniques, including supervised ML, unsupervised ML, deep learning, and hybrid ML approaches such as SMOTE-ENN, explainable AI, federated learning, and ensemble learning. The study also explored their potential use in enhancing fraud detection in imbalanced and multidimensional datasets. A significant finding of the study was the identification of commonly employed datasets, such as Medicare, the List of Excluded Individuals and Entities (LEIE), and Kaggle datasets, which serve as a baseline for evaluating machine learning (ML) models. The study’s findings comprehensively identify the challenges of employing machine learning (ML) in healthcare systems, including data quality, system scalability, regulatory compliance, and resource constraints. The study provides actionable insights, such as model interpretability to enable regulatory compliance and federated learning for confidential data sharing, which is particularly relevant for policymakers, healthcare providers, and insurance companies that intend to deploy a robust, scalable, and secure fraud detection infrastructure. The study presents a comprehensive framework for enhancing real-time healthcare fraud detection through self-learning, interpretable, and safe machine learning (ML) infrastructures, integrating theoretical advancements with practical application needs. 
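As one concrete illustration of the class-imbalance problem this review emphasizes, the sketch below uses class weighting on a heavily skewed synthetic "claims" dataset. The 2% fraud rate and all features are invented stand-ins for Medicare/LEIE-style records, and class weighting is just one of the techniques (alongside SMOTE-ENN, ensembles, and federated learning) that the study surveys.

```python
# Illustrative sketch of imbalance handling in fraud detection via class
# weighting. Data and fraud rate are synthetic assumptions, not real records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# ~2% positive ("fraud") class, mimicking extreme healthcare-fraud imbalance
X, y = make_classification(n_samples=4000, n_features=15,
                           weights=[0.98, 0.02], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=1)

plain = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
weighted = RandomForestClassifier(class_weight="balanced",
                                  random_state=1).fit(X_tr, y_tr)

# Recall on the minority (fraud) class is the metric that matters here:
# missing fraud is costlier than a false alarm
print("fraud recall, unweighted:", recall_score(y_te, plain.predict(X_te)))
print("fraud recall, weighted:  ", recall_score(y_te, weighted.predict(X_te)))
```

Accuracy alone would look excellent on such data even for a model that never flags fraud, which is why the review's focus on imbalance-aware methods and minority-class metrics matters.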

19 pages, 991 KB  
Article
Enhancing Machine Learning-Based DDoS Detection Through Hyperparameter Optimization
by Shao-Rui Chen, Shiang-Jiun Chen and Wen-Bin Hsieh
Electronics 2025, 14(16), 3319; https://doi.org/10.3390/electronics14163319 - 20 Aug 2025
Viewed by 855
Abstract
In recent years, the occurrence and complexity of Distributed Denial of Service (DDoS) attacks have escalated significantly, posing threats to the availability, performance, and security of networked systems. With the rapid progression of Artificial Intelligence (AI) and Machine Learning (ML) technologies, attackers can leverage intelligent tools to automate and amplify DDoS attacks with minimal human intervention. The increasing sophistication of such attacks highlights the pressing need for more robust and precise detection methodologies. This research proposes a method to enhance the effectiveness of ML models in detecting DDoS attacks through hyperparameter tuning. By optimizing model parameters, the proposed approach aims to improve the performance of ML models in identifying DDoS attacks. The CIC-DDoS2019 dataset is utilized in this study as it offers a comprehensive set of real-world DDoS attack scenarios across various protocols and services. The proposed methodology comprises key stages, including data preprocessing, data splitting, and model training, validation, and testing. Three ML models are trained and tuned using an adaptive GridSearchCV (Cross-Validation) strategy to identify optimal parameter configurations. The results demonstrate that our method significantly improves performance and efficiency compared with general GridSearchCV. The SVM model achieves 99.87% testing accuracy and requires approximately 28% less execution time than general GridSearchCV. The LR model achieves 99.6830% testing accuracy with an execution time of 16.90 s, maintaining the same testing accuracy while reducing the execution time by about 22.8%. The KNN model achieves 99.8395% testing accuracy with 2388.89 s of execution time, also preserving accuracy while decreasing the execution time by approximately 63%. These results indicate that our approach enhances DDoS detection performance and efficiency, offering novel insights into the practical application of hyperparameter tuning for improving ML model performance in real-world scenarios. Full article
(This article belongs to the Special Issue Advancements in AI-Driven Cybersecurity and Securing AI Systems)
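The core tuning loop this paper builds on, plain GridSearchCV over an SVM, looks roughly like the sketch below. The dataset and parameter grid are illustrative stand-ins, not the CIC-DDoS2019 features or the paper's adaptive search strategy.

```python
# Minimal sketch of hyperparameter tuning with scikit-learn's GridSearchCV.
# The synthetic data and the parameter grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Candidate SVM hyperparameters; every combination is 5-fold cross-validated
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01], "kernel": ["rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("held-out test accuracy:", search.score(X_te, y_te))
```

The paper's contribution is making this exhaustive search cheaper (an adaptive strategy that preserves accuracy while cutting execution time); the API shape of the search itself stays the same.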

27 pages, 1393 KB  
Review
A Data-Centric Framework for Implementing Artificial Intelligence in Smart Manufacturing
by Priyanka Mudgal
Electronics 2025, 14(16), 3304; https://doi.org/10.3390/electronics14163304 - 20 Aug 2025
Viewed by 1167
Abstract
The manufacturing segment is undergoing a rapid transformation as manufacturers integrate artificial intelligence (AI) and machine learning (ML). These technologies increasingly rely on data-driven architectures, which enable manufacturers to manage large volumes of data from machines, sensors, and other sources. As a result, they optimize operations, increase productivity, and reduce costs. This paper examines the role of AI in manufacturing through the lens of data-driven architecture. It focuses on the key components, challenges, and opportunities involved in implementing these systems. The paper explores various data types and architecture models that support AI-driven manufacturing, with an emphasis on real-time analytics. It highlights key use cases in manufacturing, including predictive maintenance, quality control, and supply chain optimization, and identifies the essential components required to implement AI successfully in smart manufacturing. The paper emphasizes the critical importance of data governance, security, and scalability in developing resilient and future-proof AI systems. Finally, it reviews a data-centric framework with essential components for manufacturers aiming to leverage these technologies to drive sustained growth and innovation. Full article
(This article belongs to the Section Artificial Intelligence)

58 pages, 901 KB  
Review
A Comprehensive Evaluation of IoT Cloud Platforms: A Feature-Driven Review with a Decision-Making Tool
by Ioannis Chrysovalantis Panagou, Stylianos Katsoulis, Evangelos Nannos, Fotios Zantalis and Grigorios Koulouras
Sensors 2025, 25(16), 5124; https://doi.org/10.3390/s25165124 - 18 Aug 2025
Viewed by 1118
Abstract
The rapid proliferation of Internet of Things (IoT) devices has led to a growing ecosystem of Cloud Platforms designed to manage, process, and analyze IoT data. Selecting the optimal IoT Cloud Platform is a critical decision for businesses and developers, yet it presents a significant challenge due to the diverse range of features, pricing models, and architectural nuances. This manuscript presents a comprehensive, feature-driven review of twelve prominent IoT Cloud Platforms, including AWS IoT Core, IoT on Google Cloud Platform, and Microsoft Azure IoT Hub among others. We meticulously analyze each platform across nine key features: Security, Scalability and Performance, Interoperability, Data Analytics and AI/ML Integration, Edge Computing Support, Pricing Models and Cost-effectiveness, Developer Tools and SDK Support, Compliance and Standards, and Over-The-Air (OTA) Update Capabilities. For each feature, platforms are quantitatively scored (1–10) based on an in-depth assessment of their capabilities and offerings at the time of research. Recognizing the dynamic nature of this domain, we present our findings in a two-dimensional table to provide a clear comparative overview. Furthermore, to empower users in their decision-making process, we introduce a novel, web-based tool for evaluating IoT Cloud Platforms, called the “IoT Cloud Platforms Selector”. This interactive tool allows users to assign personalized weights to each feature, dynamically calculating and displaying weighted scores for each platform, thereby facilitating a tailored selection process. This research provides a valuable resource for researchers, practitioners, and organizations seeking to navigate the complex landscape of IoT Cloud Platforms. Full article
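The weighted-scoring logic behind such a selector tool can be captured in a few lines: per-feature scores (1–10) multiplied by user-supplied weights and normalized. The platform names below are real, but the scores and weights are made-up examples, not the manuscript's actual ratings.

```python
# Toy version of the weighted-scoring idea behind an IoT platform selector.
# Scores and weights are illustrative, not the paper's assessed values.
def weighted_scores(scores: dict, weights: dict) -> dict:
    """Return each platform's weight-normalized score, rounded to 2 decimals."""
    total_w = sum(weights.values())
    return {
        platform: round(
            sum(feature_scores[f] * weights[f] for f in weights) / total_w, 2
        )
        for platform, feature_scores in scores.items()
    }

scores = {
    "AWS IoT Core":       {"security": 9, "scalability": 9, "pricing": 6},
    "Azure IoT Hub":      {"security": 9, "scalability": 8, "pricing": 7},
    "Google Cloud (IoT)": {"security": 8, "scalability": 8, "pricing": 7},
}
# A user who weights cost-effectiveness three times as heavily as the rest
weights = {"security": 1.0, "scalability": 1.0, "pricing": 3.0}
print(weighted_scores(scores, weights))
```

With these example numbers the pricing-heavy weighting ranks Azure IoT Hub (7.6) above Google Cloud (7.4) and AWS (7.2), which is exactly the kind of preference-dependent reordering the interactive tool is meant to expose.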

29 pages, 1386 KB  
Article
A Hybrid Zero Trust Deployment Model for Securing O-RAN Architecture in 6G Networks
by Max Hashem Eiza, Brian Akwirry, Alessandro Raschella, Michael Mackay and Mukesh Kumar Maheshwari
Future Internet 2025, 17(8), 372; https://doi.org/10.3390/fi17080372 - 18 Aug 2025
Viewed by 570
Abstract
The evolution toward sixth generation (6G) wireless networks promises higher performance, greater flexibility, and enhanced intelligence. However, it also introduces a substantially enlarged attack surface driven by open, disaggregated, and multi-vendor Open RAN (O-RAN) architectures that will be utilised in 6G networks. This paper addresses the urgent need for a practical Zero Trust (ZT) deployment model tailored to O-RAN specification. To do so, we introduce a novel hybrid ZT deployment model that establishes the trusted foundation for AI/ML-driven security in O-RAN, integrating macro-level enclave segmentation with micro-level application sandboxing for xApps/rApps. In our model, the Policy Decision Point (PDP) centrally manages dynamic policies, while distributed Policy Enforcement Points (PEPs) reside in logical enclaves, agents, and gateways to enable per-session, least-privilege access control across all O-RAN interfaces. We demonstrate feasibility via a Proof of Concept (PoC) implemented with Kubernetes and Istio and based on the NIST Policy Machine (PM). The PoC illustrates how pods can represent enclaves and sidecar proxies can embody combined agent/gateway functions. Performance discussion indicates that enclave-based deployment adds 1–10 ms of additional per-connection latency while CPU/memory overhead from running a sidecar proxy per enclave is approximately 5–10% extra utilisation, with each proxy consuming roughly 100–200 MB of RAM. Full article
(This article belongs to the Special Issue Secure and Trustworthy Next Generation O-RAN Optimisation)
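A toy sketch of the PDP/PEP split described above: one central, default-deny decision function that distributed enforcement points query on every session. The roles, interfaces, and policy table are invented for illustration; they are not taken from the O-RAN specification or the paper's Kubernetes/Istio PoC.

```python
# Hypothetical sketch of Zero Trust policy decision/enforcement separation.
# Roles, interfaces, and the policy table below are illustrative assumptions.
POLICIES = {
    # (subject role, target interface) -> set of permitted operations
    ("xapp", "E2"): {"subscribe", "read"},
    ("rapp", "A1"): {"read"},
    ("smo-operator", "O1"): {"read", "write", "configure"},
}

def pdp_decide(role: str, interface: str, operation: str) -> bool:
    """Central Policy Decision Point: deny by default, grant only what
    an explicit policy entry allows (least privilege)."""
    return operation in POLICIES.get((role, interface), set())

def pep_enforce(role: str, interface: str, operation: str) -> str:
    """Enclave-side Policy Enforcement Point: every session re-asks the PDP
    rather than trusting prior decisions (the Zero Trust property)."""
    return "permit" if pdp_decide(role, interface, operation) else "deny"

print(pep_enforce("xapp", "E2", "subscribe"))  # granted by policy
print(pep_enforce("xapp", "E2", "write"))      # no grant, so denied
```

The per-session round trip to the decision logic is also where the paper's measured 1–10 ms enclave latency overhead comes from in a real deployment.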

29 pages, 3542 KB  
Review
Digital Twins, AI, and Cybersecurity in Additive Manufacturing: A Comprehensive Review of Current Trends and Challenges
by Md Sazol Ahmmed, Laraib Khan, Muhammad Arif Mahmood and Frank Liou
Machines 2025, 13(8), 691; https://doi.org/10.3390/machines13080691 - 6 Aug 2025
Viewed by 1673
Abstract
The development of Industry 4.0 has accelerated the adoption of sophisticated technologies, including Digital Twins (DTs), Artificial Intelligence (AI), and cybersecurity, within Additive Manufacturing (AM). Enabling real-time monitoring, process optimization, predictive maintenance, and secure data management can redefine conventional manufacturing paradigms. Although their individual importance is increasing, a consistent understanding of how these technologies interact and collectively improve AM procedures is lacking. Focusing on the integration of DTs, modular AI, and cybersecurity in AM, this review presents a comprehensive analysis of over 137 research publications from Scopus, Web of Science, Google Scholar, and ResearchGate. The publications are categorized into three thematic groups, followed by an analysis of key findings. Finally, the study identifies research gaps and proposes detailed recommendations along with a framework for future research. The study reveals that traditional AM processes have undergone significant transformations driven by digital threads, DTs, and AI. However, this digitalization introduces vulnerabilities, leaving AM systems prone to cyber-physical attacks. Emerging advancements in AI, Machine Learning (ML), and Blockchain present promising solutions to mitigate these challenges. This paper is among the first to comprehensively summarize and evaluate the advancements in AM, emphasizing the integration of DTs, modular AI, and cybersecurity strategies. Full article
(This article belongs to the Special Issue Neural Networks Applied in Manufacturing and Design)

31 pages, 1583 KB  
Article
Ensuring Zero Trust in GDPR-Compliant Deep Federated Learning Architecture
by Zahra Abbas, Sunila Fatima Ahmad, Adeel Anjum, Madiha Haider Syed, Saif Ur Rehman Malik and Semeen Rehman
Computers 2025, 14(8), 317; https://doi.org/10.3390/computers14080317 - 4 Aug 2025
Viewed by 1058
Abstract
Deep Federated Learning (DFL) revolutionizes machine learning (ML) by enabling collaborative model training across diverse, decentralized data sources without direct data sharing, emphasizing user privacy and data sovereignty. Despite its potential, DFL’s application in sensitive sectors is hindered by challenges in meeting rigorous standards like the GDPR, with traditional setups struggling to ensure compliance and maintain trust. Addressing these issues, our research introduces an innovative Zero Trust-based DFL architecture designed for GDPR-compliant systems, integrating advanced security and privacy mechanisms to ensure safe and transparent cross-node data processing. Our earlier work proposed the basic GDPR-compliant DFL architecture; here, we validate that architecture by formally verifying it using High-Level Petri Nets (HLPNs). This Zero Trust-based framework facilitates secure, decentralized model training without direct data sharing. Furthermore, we implemented a case study using the MNIST and CIFAR-10 datasets to compare the existing approach with the proposed Zero Trust-based DFL methodology. Our experiments confirmed its effectiveness in enhancing trust, complying with GDPR, and promoting DFL adoption in privacy-sensitive areas, achieving secure, ethical Artificial Intelligence (AI) with transparent and efficient data processing. Full article

21 pages, 2065 KB  
Article
Enhancing Security in 5G and Future 6G Networks: Machine Learning Approaches for Adaptive Intrusion Detection and Prevention
by Konstantinos Kalodanis, Charalampos Papapavlou and Georgios Feretzakis
Future Internet 2025, 17(7), 312; https://doi.org/10.3390/fi17070312 - 18 Jul 2025
Viewed by 1064
Abstract
The evolution from 4G to 5G—and eventually to the forthcoming 6G networks—has revolutionized wireless communications by enabling high-speed, low-latency services that support a wide range of applications, including the Internet of Things (IoT), smart cities, and critical infrastructures. However, the unique characteristics of these networks—extensive connectivity, device heterogeneity, and architectural flexibility—impose significant security challenges. This paper introduces a comprehensive framework for enhancing the security of current and emerging wireless networks by integrating state-of-the-art machine learning (ML) techniques into intrusion detection and prevention systems. It also thoroughly explores the key aspects of wireless network security, including architectural vulnerabilities in both 5G and future 6G networks, novel ML algorithms tailored to address evolving threats, privacy-preserving mechanisms, and regulatory compliance with the EU AI Act. Finally, a Wireless Intrusion Detection Algorithm (WIDA) is proposed, demonstrating promising results in improving wireless network security. Full article
(This article belongs to the Special Issue Advanced 5G and Beyond Networks)
24 pages, 2173 KB  
Article
A Novel Ensemble of Deep Learning Approach for Cybersecurity Intrusion Detection with Explainable Artificial Intelligence
by Abdullah Alabdulatif
Appl. Sci. 2025, 15(14), 7984; https://doi.org/10.3390/app15147984 - 17 Jul 2025
Cited by 1 | Viewed by 1650
Abstract
In today’s increasingly interconnected digital world, cyber threats have grown in frequency and sophistication, making intrusion detection systems a critical component of modern cybersecurity frameworks. Traditional IDS methods, often based on static signatures and rule-based systems, are no longer sufficient to detect and respond to complex and evolving attacks. To address these challenges, Artificial Intelligence and machine learning have emerged as powerful tools for enhancing the accuracy, adaptability, and automation of IDS solutions. This study presents a novel, hybrid ensemble learning-based intrusion detection framework that integrates deep learning and traditional ML algorithms with explainable artificial intelligence for real-time cybersecurity applications. The proposed model combines an Artificial Neural Network and Support Vector Machine as base classifiers and employs a Random Forest as a meta-classifier to fuse predictions, improving detection performance. Recursive Feature Elimination is utilized for optimal feature selection, while SHapley Additive exPlanations (SHAP) provide both global and local interpretability of the model’s decisions. The framework is deployed using a Flask-based web interface in the Amazon Elastic Compute Cloud environment, capturing live network traffic and offering sub-second inference with visual alerts. Experimental evaluations using the NSL-KDD dataset demonstrate that the ensemble model outperforms individual classifiers, achieving a high accuracy of 99.40%, along with excellent precision, recall, and F1-score metrics. This research not only enhances detection capabilities but also bridges the trust gap in AI-powered security systems through transparency. The solution shows strong potential for application in critical domains such as finance, healthcare, industrial IoT, and government networks, where real-time and interpretable threat detection is vital. Full article
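The stacking design described above, neural-network and SVM base classifiers fused by a random-forest meta-classifier, can be sketched with scikit-learn stand-ins. The dataset, layer sizes, and hyperparameters below are illustrative assumptions, not the paper's actual NSL-KDD configuration, and the RFE/SHAP steps are omitted for brevity.

```python
# Stacked ensemble sketch: ANN + SVM base learners, random-forest fusion.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "network traffic": 20 features, binary normal/attack labels.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [
    ("ann", make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=500, random_state=0))),
    ("svm", make_pipeline(StandardScaler(),
                          SVC(probability=True, random_state=0))),
]

# The random forest learns to fuse the base classifiers' predictions.
model = StackingClassifier(estimators=base,
                           final_estimator=RandomForestClassifier(random_state=0))
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2f}")
```

The meta-classifier sees cross-validated base predictions rather than raw features, which is what lets stacking outperform either base model alone.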
40 pages, 2206 KB  
Review
Toward Generative AI-Based Intrusion Detection Systems for the Internet of Vehicles (IoV)
by Isra Mahmoudi, Djallel Eddine Boubiche, Samir Athmani, Homero Toral-Cruz and Freddy I. Chan-Puc
Future Internet 2025, 17(7), 310; https://doi.org/10.3390/fi17070310 - 17 Jul 2025
Cited by 2 | Viewed by 1303
Abstract
The increasing complexity and scale of Internet of Vehicles (IoV) networks pose significant security challenges, necessitating the development of advanced intrusion detection systems (IDS). Traditional IDS approaches, such as rule-based and signature-based methods, are often inadequate in detecting novel and sophisticated attacks due to their limited adaptability and dependency on predefined patterns. To overcome these limitations, machine learning (ML) and deep learning (DL)-based IDS have been introduced, offering better generalization and the ability to learn from data. However, these models can still struggle with zero-day attacks, require large volumes of labeled data, and may be vulnerable to adversarial examples. In response to these challenges, Generative AI-based IDS—leveraging models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformers—have emerged as promising solutions that offer enhanced adaptability, synthetic data generation for training, and improved detection capabilities for evolving threats. This survey provides an overview of IoV architecture, vulnerabilities, and classical IDS techniques while focusing on the growing role of Generative AI in strengthening IoV security. It discusses the current landscape, highlights the key challenges, and outlines future research directions aimed at building more resilient and intelligent IDS for the IoV ecosystem. Full article
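The detection principle behind VAE- and GAN-based IDS is that a generative model fit on normal traffic reconstructs it well, so records with high reconstruction error are flagged as intrusions. The toy sketch below illustrates only that principle: a per-feature mean stands in for a trained generative model, and all numbers are invented.

```python
# Reconstruction-error anomaly detection, the idea underlying VAE/GAN IDS.
# Features per record: (packet rate, mean packet size) -- invented values.
normal_traffic = [
    [5.0, 100.0], [5.2, 98.0], [4.8, 101.0], [5.1, 99.0],
]

# "Training": learn the normal profile (a real VAE learns a latent density).
profile = [sum(col) / len(col) for col in zip(*normal_traffic)]

def reconstruction_error(record):
    """Squared distance from the learned normal profile."""
    return sum((x - m) ** 2 for x, m in zip(record, profile))

# Threshold calibrated on normal data: the worst error seen during training.
threshold = max(reconstruction_error(r) for r in normal_traffic)

def is_intrusion(record):
    return reconstruction_error(record) > threshold

print(is_intrusion([5.0, 99.5]))    # benign-looking record
print(is_intrusion([60.0, 900.0]))  # flood-like record
```

Generative models extend this scheme by also synthesizing realistic attack samples for training, which addresses the labeled-data scarcity the survey highlights.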