Search Results (193)

Search Parameters:
Keywords = Non-IID

18 pages, 3861 KB  
Article
DRL-Based Adaptive Time Threshold Client Selection FL
by Sreyleak Sam, Taikuong Iv, Rothny Mom, Seungwoo Kang, Inseok Song, Seyha Ros, Sovanndoeur Riel and Seokhoon Kim
Symmetry 2025, 17(10), 1700; https://doi.org/10.3390/sym17101700 (registering DOI) - 10 Oct 2025
Abstract
Federated Learning (FL) has been proposed as a machine learning paradigm that preserves data privacy by training models in a decentralized manner. However, FL is challenged by device heterogeneity, asymmetric data contribution, and imbalanced datasets, which complicate system control and degrade performance through long waiting times before aggregation. To tackle these challenges, we propose Adaptive Time Threshold Client Selection using DRL (ATCS-FL), which adjusts the time threshold (α) in each communication round based on the computing and resource capacity of each device and the volume of data updates. A Double Deep Q-Network (DDQN) model determines the appropriate α according to variations in local training time, improving performance while reducing latency. Based on the α, the server selects a subset of clients with adequate resources to finish training within the threshold to participate in that round. By dynamically adjusting the α and adaptively selecting the number of clients, our approach mitigates the impact of heterogeneous training speeds and significantly enhances communication efficiency. Our experiments use the CIFAR-10 and MNIST benchmark datasets for image classification with convolutional neural networks across non-IID distribution levels in FL. Specifically, ATCS-FL demonstrates performance improvement and latency reduction of 77% and 75%, respectively, compared to FedProx and FLASH-RL.
(This article belongs to the Section Computer)
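The selection rule described above — keep only the clients that can finish local training within the current round's threshold — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the use of pre-estimated per-client training times are assumptions, and the DDQN that chooses α is omitted.

```python
def select_clients(est_times, alpha):
    """Return indices of clients whose estimated local training time
    fits within the round's time threshold alpha."""
    return [i for i, t in enumerate(est_times) if t <= alpha]

# clients 0 and 2 finish within a 5-second threshold; client 1 is skipped
print(select_clients([3.0, 9.0, 4.5], 5.0))  # → [0, 2]
```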

29 pages, 2430 KB  
Article
A Federated Fine-Tuning Framework for Large Language Models via Graph Representation Learning and Structural Segmentation
by Yuxin Dong, Ruotong Wang, Guiran Liu, Binrong Zhu, Xiaohan Cheng, Zijun Gao and Pengbin Feng
Mathematics 2025, 13(19), 3201; https://doi.org/10.3390/math13193201 - 6 Oct 2025
Abstract
This paper focuses on the efficient fine-tuning of large language models within the federated learning framework. To address the performance bottlenecks caused by multi-source heterogeneity and structural inconsistency, a structure-aware federated fine-tuning method is proposed. The method incorporates a graph representation module (GRM) to model internal structural relationships within text and employs a segmentation mechanism (SM) to reconstruct and align semantic structures across inputs, thereby enhancing structural robustness and generalization under non-IID (non-Independent and Identically Distributed) settings. During training, the method ensures data locality and integrates structural pruning with gradient encryption (SPGE) strategies to balance privacy preservation and communication efficiency. Compared with representative federated fine-tuning baselines such as FedNLP and FedPrompt, the proposed method achieves consistent accuracy and F1-score improvements across multiple tasks. To evaluate the effectiveness of the proposed method, extensive comparative experiments are conducted across tasks of text classification, named entity recognition, and question answering, using multiple datasets with diverse structures and heterogeneity levels. Experimental results show that the proposed approach significantly outperforms existing federated fine-tuning strategies on most tasks, achieving higher performance while preserving privacy, and demonstrating strong practical applicability and generalization potential.
(This article belongs to the Special Issue Privacy-Preserving Machine Learning in Large Language Models (LLMs))

19 pages, 1830 KB  
Article
Addressing Non-IID with Data Quantity Skew in Federated Learning
by Narisu Cha and Long Chang
Information 2025, 16(10), 861; https://doi.org/10.3390/info16100861 (registering DOI) - 4 Oct 2025
Abstract
Non-IID data is one of the key challenges in federated learning: data heterogeneity may lead to slower convergence, reduced accuracy, and more training rounds. To address the common Non-IID data distribution problem in federated learning, we propose a comprehensive dynamic optimization approach built on existing methods. It leverages MAP estimation of the Dirichlet parameter β to dynamically adjust the regularization coefficient μ and introduces orthogonal gradient coefficients Δi to mitigate gradient interference among different classes. The approach is compatible with existing federated learning frameworks and can be easily integrated. It achieves significant accuracy improvements in both mildly and severely Non-IID scenarios while maintaining a strong performance lower bound.
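The Dirichlet parameter β mentioned above is the standard knob for synthesizing non-IID label splits in FL experiments. The sketch below shows that common partitioning scheme — not the paper's MAP estimator or its dynamic μ adjustment — and all names in it are assumptions:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, beta, seed=0):
    """Split sample indices across clients, drawing each class's
    client proportions from Dir(beta); smaller beta -> more skew."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)       # samples of class c
        rng.shuffle(idx)
        props = rng.dirichlet([beta] * n_clients)
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients
```

Every sample lands on exactly one client; with β well below 1, most clients end up dominated by a few classes.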

22 pages, 3386 KB  
Article
Edge-AI Enabled Resource Allocation for Federated Learning in Cell-Free Massive MIMO-Based 6G Wireless Networks: A Joint Optimization Perspective
by Chen Yang and Quanrong Fang
Electronics 2025, 14(19), 3938; https://doi.org/10.3390/electronics14193938 - 4 Oct 2025
Abstract
The advent of sixth-generation (6G) wireless networks and cell-free massive multiple-input multiple-output (MIMO) architectures underscores the need for efficient resource allocation to support federated learning (FL) at the network edge. Existing approaches often treat communication, computation, and learning in isolation, overlooking dynamic heterogeneity and fairness, which leads to degraded performance in large-scale deployments. To address this gap, we propose a joint optimization framework that integrates communication–computation co-design, fairness-aware aggregation, and a hybrid strategy combining convex relaxation with deep reinforcement learning. Extensive experiments on benchmark vision datasets and real-world wireless traces demonstrate that the framework achieves up to 23% higher accuracy, 18% lower latency, and 21% energy savings compared with state-of-the-art baselines. These findings advance joint optimization in FL and demonstrate scalability for 6G applications.

38 pages, 6431 KB  
Article
FedResilience: A Federated Classification System to Ensure Critical LTE Communications During Natural Disasters
by Alvaro Acuña-Avila, Christian Fernández-Campusano, Héctor Kaschel and Raúl Carrasco
Systems 2025, 13(10), 866; https://doi.org/10.3390/systems13100866 - 2 Oct 2025
Abstract
Natural disasters can disrupt communication services, leading to severe consequences in emergencies. Maintaining connectivity and communication quality during crises is crucial for coordinating rescues, providing critical information, and ensuring reliable and secure service. This study proposes FedResilience, a Federated Learning (FL) system for classifying Long-Term Evolution (LTE) network coverage in both normal operation and natural disaster scenarios. A three-tier architecture is implemented: (i) edge nodes, (ii) a central aggregation server, and (iii) a batch processing interface. Five FL aggregation methods (FedAvg, FedProx, FedAdam, FedYogi, and FedAdagrad) were evaluated under normal conditions and disaster simulations. The results show that FedAdam outperforms the other methods under normal conditions, achieving an F1 score of 0.7271 and a Global System Adherence (SAglobal) of 91.51%. In disaster scenarios, FedProx was superior, with an F1 score of 0.7946 and SAglobal of 61.73%. The innovation in this study is the introduction of the System Adherence (SA) metric to evaluate the predictive fidelity of the model. The system demonstrated robustness against Non-Independent and Identically Distributed (non-IID) data distributions and the ability to handle significant class imbalances. FedResilience serves as a tool for companies to implement automated corrective actions, contributing to the predictive maintenance of LTE networks through FL while preserving data privacy.
(This article belongs to the Special Issue Data-Driven Decision Making for Complex Systems)

24 pages, 1217 KB  
Article
Adaptive Multimodal Fusion in Vertical Federated Learning for Decentralized Glaucoma Screening
by Ayesha Jabbar, Jianjun Huang, Muhammad Kashif Jabbar and Asad Ali
Brain Sci. 2025, 15(9), 990; https://doi.org/10.3390/brainsci15090990 - 14 Sep 2025
Abstract
Background/Objectives: Early and accurate detection of glaucoma is vital for preventing irreversible vision loss, yet traditional diagnostic approaches relying solely on unimodal retinal imaging are limited by data sparsity and constrained context. Furthermore, real-world clinical data are often fragmented across institutions under strict privacy regulations, posing significant challenges for centralized machine learning methods. Methods: To address these barriers, this study proposes a novel Quality Aware Vertical Federated Learning (QAVFL) framework for decentralized multimodal glaucoma detection. The proposed system dynamically integrates clinical text, retinal fundus images, and biomedical signal data through modality-specific encoders, followed by a Fusion Attention Module (FAM) that adaptively weighs the reliability and contribution of each modality. Unlike conventional early fusion or horizontal federated learning methods, QAVFL operates in vertically partitioned environments and employs secure aggregation mechanisms incorporating homomorphic encryption and differential privacy to preserve patient confidentiality. Results: Extensive experiments conducted under heterogeneous non-IID settings demonstrate that QAVFL achieves an accuracy of 98.6%, a recall of 98.6%, an F1-score of 97.0%, and an AUC of 0.992, outperforming unimodal and early fusion baselines with statistically significant improvements (p < 0.01). Conclusions: The findings validate the effectiveness of dynamic multimodal fusion under privacy-preserving decentralized learning and highlight the scalability and clinical applicability of QAVFL for robust glaucoma screening across fragmented healthcare environments.

20 pages, 3787 KB  
Article
Federated Learning for XSS Detection: Analysing OOD, Non-IID Challenges, and Embedding Sensitivity
by Bo Wang, Imran Khan, Martin White and Natalia Beloff
Electronics 2025, 14(17), 3483; https://doi.org/10.3390/electronics14173483 - 31 Aug 2025
Abstract
This paper investigates federated learning (FL) for cross-site scripting (XSS) detection under out-of-distribution (OOD) drift. Real-world XSS traffic involves fragmented attacks, heterogeneous benign inputs, and client imbalance, which erode conventional detectors. To simulate this, we construct two structurally divergent datasets: one with obfuscated, mixed-structure samples and another with syntactically regular examples, inducing structural OOD in both classes. We evaluate GloVe, GraphCodeBERT, and CodeT5 in both centralised and federated settings, tracking embedding drift and client variance. FL consistently improves OOD robustness by averaging decision boundaries from cleaner clients. Under FL scenarios, CodeT5 achieves the best aggregated performance (97.6% accuracy, 3.5% FPR), followed by GraphCodeBERT (96.8%, 4.7%), which converges more stably. GloVe reaches a competitive final accuracy (96.2%) but exhibits high instability across rounds, with a higher false positive rate (5.5%) and pronounced variance under FedProx. These results highlight the value and limits of structure-aware embeddings and support FL as a practical, privacy-preserving defence in OOD XSS scenarios.

21 pages, 1408 KB  
Article
A Federated Learning Framework with Attention Mechanism and Gradient Compression for Time-Series Strategy Modeling
by Weiyuan Cui, Liman Zhang, Zhengxi Sun, Ziying Zhai, Xiahuan Cai, Zeyu Lan and Yan Zhan
Electronics 2025, 14(16), 3293; https://doi.org/10.3390/electronics14163293 - 19 Aug 2025
Abstract
With the increasing demand for privacy preservation and strategy sharing in global financial markets, traditional centralized modeling approaches have become inadequate for multi-institutional collaborative tasks, particularly under the realistic challenges of multi-source heterogeneity and non-independent and identically distributed (non-IID) data. To address these limitations, a heterogeneity-aware framework, Federated Quantitative Learning, is proposed to enable efficient cross-market financial strategy modeling while preserving data privacy. This framework integrates a Path Quality-Aware Aggregation Mechanism, a Gradient Clipping and Compression Module, and a Heterogeneity-Adaptive Optimizer, collectively enhancing model robustness and generalization. Empirical studies conducted on multiple real-world financial datasets, including those from the United States, European Union, and Asia-Pacific markets, demonstrate that Federated Quantitative Learning outperforms existing mainstream methods in key performance indicators such as annualized return, Sharpe ratio, maximum drawdown, and volatility. Under the full model configuration, Federated Quantitative Learning achieves an annualized return of 12.72%, a Sharpe ratio of 1.12, a maximum drawdown limited to 10.3%, and a reduced volatility of 9.7%, showing significant improvements over methods such as Federated Averaging, Federated Proximal Optimization, and Model-Contrastive Federated Learning. Moreover, module ablation studies and attention mechanism comparisons further validate the effectiveness of each core component in enhancing model performance. This study introduces a novel paradigm for secure strategy sharing and high-quality modeling in multi-institutional quantitative systems, offering practical feasibility and broad applicability.
(This article belongs to the Special Issue Security and Privacy in Distributed Machine Learning)

23 pages, 3739 KB  
Article
FedDPA: Dynamic Prototypical Alignment for Federated Learning with Non-IID Data
by Oussama Akram Bensiah and Rohallah Benaboud
Electronics 2025, 14(16), 3286; https://doi.org/10.3390/electronics14163286 - 19 Aug 2025
Abstract
Federated learning (FL) has emerged as a powerful framework for decentralized model training, preserving data privacy by keeping datasets localized on distributed devices. However, data heterogeneity, characterized by significant variations in size, statistical distribution, and composition across client datasets, presents a persistent challenge that impairs model performance, compromises generalization, and delays convergence. To address these issues, we propose FedDPA, a novel framework that utilizes dynamic prototypical alignment. FedDPA operates in three stages. First, it computes class-specific prototypes for each client to capture local data distributions, integrating them into an adaptive regularization mechanism. Next, a hierarchical aggregation strategy clusters and combines prototypes from similar clients, which reduces communication overhead and stabilizes model updates. Finally, a contrastive alignment process refines the global model by enforcing intra-class compactness and inter-class separation in the feature space. These mechanisms work in concert to mitigate client drift and enhance global model performance. We conducted extensive evaluations on standard classification benchmarks—EMNIST, FEMNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet-200—under various non-independent and identically distributed (non-IID) scenarios. The results demonstrate the superiority of FedDPA over state-of-the-art methods, including FedAvg, FedNH, and FedROD. Our findings highlight FedDPA’s enhanced effectiveness, stability, and adaptability, establishing it as a scalable and efficient solution to the critical problem of data heterogeneity in federated learning.
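The first stage described above — class-specific prototypes per client — is, in its usual form, just the mean feature vector of each class in the client's local data. A minimal sketch under that assumption (the function name is ours; FedDPA's regularization and hierarchical aggregation are omitted):

```python
import numpy as np

def class_prototypes(features, labels):
    """One prototype per class: the mean of that class's feature
    vectors in a client's local dataset."""
    return {int(c): features[labels == c].mean(axis=0)
            for c in np.unique(labels)}

# two class-0 samples average to [1, 1]; the lone class-1 sample is its own prototype
feats = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 6.0]])
labs = np.array([0, 0, 1])
protos = class_prototypes(feats, labs)
```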

20 pages, 2386 KB  
Article
Personalized Federated Learning Based on Dynamic Parameter Fusion and Prototype Alignment
by Ying Chen, Jing Wen, Shaoling Liang, Zhaofa Chen and Baohua Huang
Sensors 2025, 25(16), 5076; https://doi.org/10.3390/s25165076 - 15 Aug 2025
Abstract
To address the limited generalization of federated learning under non-independent and identically distributed (Non-IID) data, we propose FedDFPA, a personalized federated learning framework that integrates dynamic parameter fusion and prototype alignment. We design a class-wise dynamic parameter fusion mechanism that adaptively fuses global and local classifier parameters at the class level. It enables each client to preserve its reliable local knowledge while selectively incorporating beneficial global information for personalized classification. We introduce a prototype alignment mechanism based on both global and historical information. By aligning current local features with global prototypes and historical local prototypes, it improves cross-client semantic consistency and enhances the stability of local features. To evaluate the effectiveness of FedDFPA, we conduct extensive experiments on various Non-IID settings and client participation rates. Compared to the average performance of state-of-the-art algorithms, FedDFPA improves the average test accuracy by 3.59% and 4.71% under practical and pathological heterogeneous settings, respectively. These results confirm the effectiveness of our dual-mechanism design in achieving a better balance between personalization and collaboration in federated learning.
(This article belongs to the Section Intelligent Sensors)
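Class-wise parameter fusion of the kind described above can be pictured as a per-class convex combination of global and local classifier rows. This is only a sketch of that idea: the fusion weights below are assumed inputs, whereas FedDFPA derives them adaptively.

```python
def fuse_classifier(global_rows, local_rows, lams):
    """Fuse per-class classifier rows: w_c = lam_c*global_c + (1-lam_c)*local_c.
    lam_c near 1 trusts global knowledge for class c; near 0 keeps local."""
    return [
        [lam * g + (1 - lam) * l for g, l in zip(grow, lrow)]
        for grow, lrow, lam in zip(global_rows, local_rows, lams)
    ]

# class 0 taken fully from the global model, class 1 an even blend
fused = fuse_classifier([[2.0, 4.0], [6.0, 8.0]],
                        [[0.0, 0.0], [2.0, 4.0]],
                        [1.0, 0.5])
print(fused)  # → [[2.0, 4.0], [4.0, 6.0]]
```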

34 pages, 2740 KB  
Article
Lightweight Anomaly Detection in Digit Recognition Using Federated Learning
by Anja Tanović and Ivan Mezei
Future Internet 2025, 17(8), 343; https://doi.org/10.3390/fi17080343 - 30 Jul 2025
Abstract
This study presents a lightweight autoencoder-based approach for anomaly detection in digit recognition using federated learning on resource-constrained embedded devices. We implement and evaluate compact autoencoder models on the ESP32-CAM microcontroller, enabling both training and inference directly on the device using 32-bit floating-point arithmetic. The system is trained on a reduced MNIST dataset (1000 resized samples) and evaluated using EMNIST and MNIST-C for anomaly detection. Seven fully connected autoencoder architectures are first evaluated on a PC to explore the impact of model size and batch size on training time and anomaly detection performance. Selected models are then re-implemented in the C programming language and deployed on a single ESP32 device, achieving training times as short as 12 min, inference latency as low as 9 ms, and F1 scores of up to 0.87. Autoencoders are further tested on ten devices in a real-world federated learning experiment using Wi-Fi. We explore non-IID and IID data distribution scenarios: (1) digit-specialized devices and (2) partitioned datasets with varying content and anomaly types. The results show that small unmodified autoencoder models can be effectively trained and evaluated directly on low-power hardware. The best models achieve F1 scores of up to 0.87 in the standard IID setting and 0.86 in the extreme non-IID setting. Despite some clients being trained on corrupted datasets, federated aggregation proves resilient, maintaining high overall performance. The resource analysis shows that more than half of the models and all the training-related allocations fit entirely in internal RAM. These findings confirm the feasibility of local float32 training and collaborative anomaly detection on low-cost hardware, supporting scalable and privacy-preserving edge intelligence.
(This article belongs to the Special Issue Intelligent IoT and Wireless Communication)

22 pages, 2678 KB  
Article
Federated Semi-Supervised Learning with Uniform Random and Lattice-Based Client Sampling
by Mei Zhang and Feng Yang
Entropy 2025, 27(8), 804; https://doi.org/10.3390/e27080804 - 28 Jul 2025
Abstract
Federated semi-supervised learning (Fed-SSL) has emerged as a powerful framework that leverages both labeled and unlabeled data distributed across clients. To reduce communication overhead, real-world deployments often adopt partial client participation, where only a subset of clients is selected in each round. However, under non-i.i.d. data distributions, the choice of client sampling strategy becomes critical, as it significantly affects training stability and final model performance. To address this challenge, we propose a novel federated averaging semi-supervised learning algorithm, FedAvg-SSL, that considers two sampling approaches: uniform random sampling (standard Monte Carlo) and structured lattice-based sampling inspired by quasi-Monte Carlo (QMC) techniques, which ensures more balanced client participation through structured deterministic selection. On the client side, each selected participant alternates between updating the global model and refining the pseudo-label model using local data. We provide a rigorous convergence analysis, showing that FedAvg-SSL achieves a sublinear convergence rate with linear speedup. Extensive experiments not only validate our theoretical findings but also demonstrate the advantages of lattice-based sampling in federated learning, offering insights into the interplay among algorithm performance, client participation rates, local update steps, and sampling strategies.
(This article belongs to the Special Issue Number Theoretic Methods in Statistics: Theory and Applications)
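A rank-1 lattice gives the flavor of the structured, deterministic selection described above: points x_i = frac(i·a/m) spread evenly over [0, 1) and are then mapped to client ids. The paper's exact lattice construction may differ; the generator `a` and all names below are assumptions.

```python
def lattice_sample(n_clients, m, a):
    """Pick m client ids from the rank-1 lattice x_i = frac(i*a/m),
    using exact integer arithmetic: (i*a mod m) * n_clients // m."""
    return [(i * a % m) * n_clients // m for i in range(m)]

# five evenly spread, deterministic picks from 100 clients
print(lattice_sample(100, 5, 3))  # → [0, 60, 20, 80, 40]
```

Unlike uniform random draws, the same (m, a) always yields the same well-spread subset, which is the balance property the QMC motivation appeals to.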

12 pages, 759 KB  
Article
Privacy-Preserving Byzantine-Tolerant Federated Learning Scheme in Vehicular Networks
by Shaohua Liu, Jiahui Hou and Gang Shen
Electronics 2025, 14(15), 3005; https://doi.org/10.3390/electronics14153005 - 28 Jul 2025
Abstract
With the rapid development of vehicular network technology, data sharing and collaborative training among vehicles have become key to enhancing the efficiency of intelligent transportation systems. However, data heterogeneity and potential Byzantine attacks push model updates in different directions during the iterative process, so the boundary between benign and malicious gradients shifts continuously. To address these issues, this paper proposes a privacy-preserving Byzantine-tolerant federated learning scheme. Specifically, we design a gradient detection method based on median absolute deviation (MAD), which calculates MAD in each round to set a gradient anomaly detection threshold, thereby achieving precise identification and dynamic filtering of malicious gradients. Additionally, to protect vehicle privacy, we obfuscate uploaded parameters to prevent leakage during transmission. Finally, during the aggregation phase, malicious gradients are eliminated, and only benign gradients are selected to participate in the global model update, which improves the model accuracy. Experimental results on three datasets demonstrate that the proposed scheme effectively mitigates the impact of non-independent and identically distributed (non-IID) heterogeneity and Byzantine behaviors while maintaining low computational cost.
(This article belongs to the Special Issue Cryptography in Internet of Things)
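The per-round MAD thresholding step can be sketched on scalar gradient norms: gradients whose norm deviates from the round's median by more than k·MAD are flagged as anomalous. The multiplier `k` and the per-norm granularity are our assumptions; the paper applies the idea to full uploaded gradients.

```python
import statistics

def mad_filter(grad_norms, k=3.0):
    """Mark each gradient norm as benign (True) or anomalous (False)
    using a k*MAD band around the round's median."""
    med = statistics.median(grad_norms)
    mad = statistics.median(abs(g - med) for g in grad_norms)
    thr = k * mad  # the dynamic, per-round anomaly threshold
    return [abs(g - med) <= thr for g in grad_norms]

# the inflated norm 10.0 is flagged; the rest pass
print(mad_filter([1.0, 1.1, 0.95, 1.05, 10.0]))
```

Because the median and MAD are recomputed each round, the threshold tracks the shifting benign/malicious boundary the abstract describes.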

24 pages, 1530 KB  
Article
A Lightweight Robust Training Method for Defending Model Poisoning Attacks in Federated Learning Assisted UAV Networks
by Lucheng Chen, Weiwei Zhai, Xiangfeng Bu, Ming Sun and Chenglin Zhu
Drones 2025, 9(8), 528; https://doi.org/10.3390/drones9080528 - 28 Jul 2025
Abstract
The integration of unmanned aerial vehicles (UAVs) into next-generation wireless networks greatly enhances the flexibility and efficiency of communication and distributed computation for ground mobile devices. Federated learning (FL) provides a privacy-preserving paradigm for device collaboration but remains highly vulnerable to poisoning attacks and is further challenged by the resource constraints and heterogeneous data common to UAV-assisted systems. Existing robust aggregation and anomaly detection methods often degrade in efficiency and reliability under these realistic adversarial and non-IID settings. To bridge these gaps, we propose FedULite, a lightweight and robust federated learning framework specifically designed for UAV-assisted environments. FedULite features unsupervised local representation learning optimized for unlabeled, non-IID data. Moreover, FedULite leverages a robust, adaptive server-side aggregation strategy that uses cosine similarity-based update filtering and dimension-wise adaptive learning rates to neutralize sophisticated data and model poisoning attacks. Extensive experiments across diverse datasets and adversarial scenarios demonstrate that FedULite reduces the attack success rate (ASR) from over 90% in undefended scenarios to below 5%, while keeping the main-task accuracy loss within 2%. Moreover, it introduces negligible computational overhead compared to standard FedAvg, with approximately 7% additional training time.
(This article belongs to the Special Issue IoT-Enabled UAV Networks for Secure Communication)
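Cosine-similarity update filtering of the kind FedULite's server-side step describes can be sketched as comparing each client's update vector with the coordinate-wise mean update and discarding dissimilar ones. The threshold `tau`, the flat-vector representation, and all names are assumptions; the dimension-wise adaptive learning rates are omitted.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def filter_updates(updates, tau=0.0):
    """Keep client updates whose cosine similarity to the mean update
    exceeds tau; sign-flipped (poisoned) updates fall below it."""
    dim = len(updates[0])
    mean = [sum(u[i] for u in updates) / len(updates) for i in range(dim)]
    return [u for u in updates if cosine(u, mean) > tau]

# two benign updates survive; the flipped one is dropped
kept = filter_updates([[1.0, 1.0], [1.2, 0.8], [-1.0, -1.0]])
```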

25 pages, 1169 KB  
Article
DPAO-PFL: Dynamic Parameter-Aware Optimization via Continual Learning for Personalized Federated Learning
by Jialu Tang, Yali Gao, Xiaoyong Li and Jia Jia
Electronics 2025, 14(15), 2945; https://doi.org/10.3390/electronics14152945 - 23 Jul 2025
Abstract
Federated learning (FL) enables multiple participants to collaboratively train models while efficiently mitigating the issue of data silos. However, large-scale heterogeneous data distributions result in inconsistent client objectives and catastrophic forgetting, leading to model bias and slow convergence. To address these challenges under non-independent and identically distributed (non-IID) data, we propose DPAO-PFL, a Dynamic Parameter-Aware Optimization framework that leverages continual learning principles to improve Personalized Federated Learning under non-IID conditions. We decompose the parameters into two components: local personalized parameters tailored to client characteristics, and global shared parameters that capture the accumulated marginal effects of parameter updates over historical rounds. Specifically, we leverage the Fisher information matrix to estimate parameter importance online, integrate the path sensitivity scores within a time-series sliding window to construct a dynamic regularization term, and adaptively adjust the constraint strength to mitigate conflicts across tasks. We evaluate the effectiveness of DPAO-PFL through extensive experiments on several benchmarks under IID and non-IID data distributions. Comprehensive experimental results indicate that DPAO-PFL outperforms baselines with improvements from 5.41% to 30.42% in average classification accuracy. By decoupling model parameters and incorporating an adaptive regularization mechanism, DPAO-PFL effectively balances generalization and personalization. Furthermore, DPAO-PFL exhibits superior performance in convergence and collaborative optimization compared to state-of-the-art FL methods.
