Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness
Abstract
1. Introduction
1.1. Related Work
- Data Preprocessing-Based Approaches: Clustering-based techniques have been employed to group clients with similar data characteristics and assign customized models accordingly. Approaches such as the Iterative Federated Clustering Algorithm (IFCA), FedSEM, and FPFC cluster clients based on model weights or task similarities, while FedMOON leverages contrastive learning to align feature spaces [10,11,12,13,14].
- Model Robustness-Oriented Strategies: These strategies incorporate regularization techniques or meta-learning paradigms to improve robustness against distribution shifts during local training. Federated Proximal (FedProx) introduces a proximal term to constrain local updates, while Federated Contrastive (FedCL) and other personalized FL methods optimize local solutions tailored to individual clients [15,16,17,18].
- Framework-Level Enhancements: Instead of exchanging model weights, these methods transmit distilled “knowledge” to alleviate divergence under non-IID settings. Notable examples include Federated Model Distillation (FedMD), the Federated Generative Network (FedGEN), and Federated Knowledge Distillation (FedKD), which use public or synthetic datasets to facilitate knowledge distillation [19,20,21].
1.2. Challenges
- Heterogeneity Metric Design: A statistical heterogeneity measure is constructed from the local mean and variance of latent feature distributions. A differentiable heterogeneity function is integrated into both training and aggregation to quantify client-level discrepancies.
- Multi-Loss Optimization Mechanism: A composite loss function combines cross-entropy loss with a heterogeneity-aware loss, a feature center alignment term, and L2 regularization, thereby enhancing convergence and generalization in non-IID environments.
- Heterogeneity-Aware Weighted Aggregation: A dynamic server-side aggregation strategy adjusts client contributions according to their heterogeneity levels, reducing the negative impact of high-variance clients and improving global model fidelity.
- Empirical Validation Across Non-IID Scenarios: Extensive experiments on datasets such as CIFAR-10 under Dirichlet-based heterogeneity (α = 0.1 or 0.5) validate the superiority of the proposed method in terms of accuracy, convergence stability, and interpretability. The results indicate enhanced global performance while maintaining client adaptability.
2. Preliminary Knowledge
Federated Learning
3. Heterogeneity-Aware Federated Optimization Framework
3.1. Overall Framework of Heterogeneity-Aware Federated Optimization
3.1.1. Modeling
3.1.2. Algorithm Implementation
Algorithm 1. Framework of Heterogeneity-Aware Federated Optimization

Inputs: global model parameters; client datasets. Output: optimized global model.

Client-Side Operations (parallel execution):
1. Global Feature Extraction. For each batch, forward the inputs through the feature extractor → output: embedding matrix.
2. Latent Information Extraction.
   a. Compute feature statistics: per-dimension mean and variance (Equation (1)).
   b. Calculate heterogeneity metrics: a variance-based metric (Equation (2)) and a manifold-based metric (Equations (3) and (4)).
   c. Compute the unified heterogeneity score (Equation (5)).
3. Multi-Loss Optimization.
   a. Evaluate the composite loss function (Equation (8)).
   b. Perform the local model update.
4. Transmit to Server. Send a tuple containing the updated local parameters and the unified heterogeneity score.

Server-Side Operations:
1. Aggregation Initialization. For round t = 1 to T, receive the client tuples.
2. Heterogeneity-Weighted Aggregation.
   a. Compute effective weights (Equation (9)).
   b. Update the global model.
3. Model Distribution. Broadcast the updated global model to all clients.

Termination: return the final global model after T rounds.
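A minimal Python sketch of one such round, treating model parameters as flat NumPy vectors; the client API (local_train, heterogeneity_score, num_samples) and the exponential down-weighting with coefficient beta are illustrative assumptions, not the paper's exact Equations (8) and (9).

```python
import numpy as np

def federated_round(global_params, clients, beta=1.0):
    """One heterogeneity-aware round (cf. Algorithm 1); params are flat vectors."""
    updates, scores, sizes = [], [], []
    for client in clients:
        # Multi-loss local update returns new parameters and latent embeddings
        local_params, embeddings = client.local_train(global_params)
        updates.append(local_params)
        scores.append(client.heterogeneity_score(embeddings))  # unified score, cf. Eq. (5)
        sizes.append(client.num_samples)
    # Effective weights: down-weight highly heterogeneous clients (cf. Eq. (9))
    raw = np.array(sizes, dtype=float) * np.exp(-beta * np.array(scores))
    weights = raw / raw.sum()
    return np.sum([w * p for w, p in zip(weights, updates)], axis=0)
```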
3.1.3. Simulation Procedure
- Local Model Training: Each client trains a local model using its own dataset.
- Heterogeneity Metrics: For each client, heterogeneity metrics are computed from two complementary views (see the sketch after this list):
  - Feature Statistics: Local feature embeddings are extracted, and statistical measures such as the mean and variance are calculated.
  - Manifold Structures: High-dimensional features are projected into a low-dimensional manifold space using techniques such as t-SNE, capturing geometric discrepancies in the latent space.
- Model and Metric Sharing: At the end of each federated round, each client shares with the server:
  - The updated model parameters.
  - The computed heterogeneity metrics (based on feature statistics and manifold structures).
- Global Model Update:
  - The server receives the model parameters and heterogeneity metrics from all participating clients.
  - Heterogeneity-Aware Aggregation: The server uses the received heterogeneity metrics to compute aggregation weights for each client. Clients with higher heterogeneity are assigned lower weights to mitigate their negative impact on the global model.
  - The server then aggregates the model parameters from all clients based on the computed weights, updating the global model.
- Model Synchronization: The updated global model is distributed back to all clients for the next round of local training.
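A minimal sketch of the two metric families described above, with the helper names and the convex combination weight assumed (the paper's exact definitions are Equations (1)–(5)); note that t-SNE serves the offline metric here, not the differentiable training loss.

```python
import numpy as np
from sklearn.manifold import TSNE

def feature_statistics(embeddings: np.ndarray):
    """Per-dimension mean and variance of a batch of latent features (cf. Eq. (1))."""
    return embeddings.mean(axis=0), embeddings.var(axis=0)

def variance_metric(embeddings: np.ndarray) -> float:
    """Statistical dispersion: average per-dimension variance (cf. Eq. (2))."""
    _, var = feature_statistics(embeddings)
    return float(var.mean())

def manifold_metric(embeddings: np.ndarray) -> float:
    """Project to 2-D with t-SNE and summarize geometric spread (cf. Eqs. (3) and (4))."""
    low_dim = TSNE(n_components=2, init="pca",
                   perplexity=min(30, len(embeddings) - 1)).fit_transform(embeddings)
    centroid = low_dim.mean(axis=0)
    return float(np.linalg.norm(low_dim - centroid, axis=1).mean())

def unified_score(embeddings: np.ndarray, lam: float = 0.5) -> float:
    """Assumed convex combination of the two metrics (cf. Eq. (5))."""
    return lam * variance_metric(embeddings) + (1 - lam) * manifold_metric(embeddings)
```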
3.1.4. Verification Method
3.2. Mechanisms for Measuring Heterogeneity
3.2.1. Local Heterogeneity Based on Feature Statistics
3.2.2. High-Order Heterogeneity Modeling via Manifold Structure
3.2.3. Unified Heterogeneity Metric
3.3. Multi-Loss Design
- Classification Loss (Cross-Entropy Loss). As the fundamental component in supervised learning, the classification loss evaluates the model’s prediction capability on the input data. It adopts the cross-entropy form to ensure the model learns discriminative features corresponding to local task labels.
- Heterogeneity Loss. Based on the multi-scale fusion heterogeneity quantification mechanism proposed in Section 3.2, a corresponding heterogeneity loss term is introduced during client-side training. This loss transforms the heterogeneity measurement into an optimizable training signal, guiding the model to suppress unstructured feature diffusion and structural drift. It comprises two parts: the first is a statistical metric based on feature variance, which uses per-dimension variance and the center deviation of intermediate embedding features to reflect the spatial dispersion of client features; the second is a nonlinear manifold-based metric, in which t-SNE maps features into a low-dimensional manifold space to quantify their structural complexity and distributional divergence.
- Feature Center Constraint. Considering the semantic feature drift among heterogeneous clients, a feature center constraint is introduced to enhance the consistency of feature representations. Specifically, in each training round, the mean vector of the embedded features from the current batch is computed, and the Euclidean distance between each sample’s feature and this mean is measured. This term encourages a more compact distribution of features within the semantic space, reduces the disruptive effect of feature dispersion on training stability, and improves the model’s discriminative capability after aggregation. It is defined as follows.
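One formulation consistent with this description, with batch size $B$, per-sample embeddings $f_i$, and batch mean $\mu$ (symbols assumed here rather than taken from the original):

$$\mu = \frac{1}{B}\sum_{i=1}^{B} f_i, \qquad \mathcal{L}_{\mathrm{center}} = \frac{1}{B}\sum_{i=1}^{B} \left\lVert f_i - \mu \right\rVert_2^2$$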
- Regularization Term (L2 Regularization). To prevent the model from overfitting to local data distributions, an L2 regularization term is imposed on the model parameters as a structural constraint. By penalizing the L2 norm of all network weight parameters, this term reduces the model’s complexity and enhances its generalization capability during global aggregation. This is particularly crucial in non-IID settings, as it mitigates local performance shifts caused by overfitting. Formally,
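a formulation consistent with this description, with layer weight matrices $W_l$ and a penalty coefficient $\lambda$ (symbols assumed), is:

$$\mathcal{L}_{\mathrm{reg}} = \lambda \sum_{l} \lVert W_l \rVert_2^2$$

The composite objective of Equation (8) combines the four terms; writing the trade-off coefficients as $\lambda_1$ and $\lambda_2$ (assumed names), one plausible form is

$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}} + \lambda_1\,\mathcal{L}_{\mathrm{het}} + \lambda_2\,\mathcal{L}_{\mathrm{center}} + \mathcal{L}_{\mathrm{reg}}$$

A PyTorch-style sketch of this combination, with coefficient names assumed and het_score standing in for the differentiable heterogeneity term of Section 3.2:

```python
import torch
import torch.nn.functional as F

def composite_loss(logits, labels, features, model, het_score,
                   lam_het=0.1, lam_center=0.1, lam_reg=1e-4):
    """Composite client objective (cf. Eq. (8)); coefficients are assumptions."""
    ce = F.cross_entropy(logits, labels)                   # classification loss
    mu = features.mean(dim=0, keepdim=True)                # batch feature center
    center = ((features - mu) ** 2).sum(dim=1).mean()      # feature center constraint
    l2 = sum((p ** 2).sum() for p in model.parameters())   # L2 regularization
    return ce + lam_het * het_score + lam_center * center + lam_reg * l2
```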
3.4. Heterogeneity-Aware Weighted Aggregation Strategy
3.5. Summary
4. Experiment
4.1. Experimental Setup
4.1.1. Dataset Settings
4.1.2. Non-IID Simulation
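The experiments simulate non-IID client data with Dirichlet label partitioning at the α values noted in Section 1.2 (0.1 and 0.5). A minimal partitioning sketch, with helper names assumed:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients using per-class Dirichlet shares;
    smaller alpha yields more skewed label distributions."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```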
4.2. Implementation Details
1. Federated Learning Framework:
  - Flower Framework: We implement the proposed heterogeneity-aware optimization method using the Flower framework, which allows easy integration of federated learning components, including client–server communication, model aggregation, and distributed training (a minimal integration sketch follows this item).
  - Client-Side Setup: Each client is assigned a non-IID local dataset to simulate realistic federated conditions. Clients independently perform local model training on their datasets and periodically upload their model updates and heterogeneity metrics to the server.
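As a sketch of how the aggregation could plug into Flower (≥1.0), one can subclass the built-in FedAvg strategy and override aggregate_fit; the metric key "het_score" (which clients would return from their fit() methods) and the exponential reweighting rule are assumptions:

```python
import numpy as np
import flwr as fl
from flwr.common import ndarrays_to_parameters, parameters_to_ndarrays

class HeterogeneityAwareStrategy(fl.server.strategy.FedAvg):
    """FedAvg variant that down-weights clients reporting high heterogeneity."""

    def aggregate_fit(self, server_round, results, failures):
        weights, updates = [], []
        for _, fit_res in results:
            h = (fit_res.metrics or {}).get("het_score", 0.0)  # client heterogeneity
            weights.append(fit_res.num_examples * np.exp(-h))
            updates.append(parameters_to_ndarrays(fit_res.parameters))
        weights = np.asarray(weights) / np.sum(weights)
        # Layer-wise weighted average of the client updates
        aggregated = [np.sum([w * u[layer] for w, u in zip(weights, updates)], axis=0)
                      for layer in range(len(updates[0]))]
        return ndarrays_to_parameters(aggregated), {}
```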
2. Model Architecture: A simple, lightweight Convolutional Neural Network (CNN) is employed across all clients for a fair comparison of methods. The model consists of three convolutional layers followed by a fully connected layer that outputs the class predictions, keeping the focus on the aggregation strategy and heterogeneity-aware optimization rather than model complexity. The layer details are as follows (a sketch follows this item):
  - Convolutional Layers: Three convolutional layers with ReLU activations and max-pooling after each convolution.
  - Fully Connected Layer: A fully connected layer outputs the final class prediction, followed by a softmax activation to generate probabilities for the classification task.
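A PyTorch sketch matching this description; the channel widths and the 3 × 32 × 32 input (CIFAR-10/SVHN) are assumptions, and 28 × 28 grayscale inputs would need adjusted dimensions:

```python
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Three conv/ReLU/max-pool stages followed by one fully connected classifier."""

    def __init__(self, in_channels=3, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 4 * 4, num_classes)  # 32x32 input -> 4x4 maps

    def forward(self, x):
        z = self.features(x).flatten(1)  # latent embedding used for feature statistics
        return self.classifier(z)        # probabilities via softmax / cross-entropy
```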
3. Federated Learning Procedure:
  - Federated Rounds: The system operates over a series of federated rounds. In each round, all clients perform local training on their datasets, calculate their heterogeneity metrics (based on feature statistics and manifold structures), and upload both the model parameters and heterogeneity scores to the server.
  - Server-Side Aggregation: The server aggregates the model parameters from all clients using the heterogeneity-aware weighted aggregation strategy described in Section 3.4. This aggregation method adjusts the contribution of each client’s update based on its local heterogeneity, reducing the impact of clients with highly imbalanced data distributions.
  - Global Model Update: After receiving updates from all clients, the server combines the model parameters using the weighted strategy, updating the global model. The updated global model is then sent back to the clients for the next round of local training.
4. Client Participation:
  - All Clients Participate in Every Round: In each federated round, all clients take part in the model update process. This ensures that the global model benefits from diverse data distributions and learns from all clients’ datasets over time.
  - Communication Efficiency: Communication overhead is minimized by uploading only the model parameters and heterogeneity metrics, rather than raw data. This preserves the privacy of client data while enabling effective federated learning.
5. Baselines: The comparison methods are listed in Section 4.3 below.
4.3. Comparative Experiment
- FedAvg: A basic federated averaging algorithm that does not consider data heterogeneity among clients.
- FedProx: Alleviates the model drift caused by heterogeneity by incorporating an additional regularization term in the local optimization process.
- FedSAM: Introduces gradient perturbation during optimization to enhance models’ robustness against data fluctuations and distributional differences.
- FedMOON: Enhances client feature consistency through contrastive learning, effectively combating client drift.
4.4. Ablation Experiment
- Method 1 (without heterogeneity loss): removes the heterogeneity-aware loss mechanism.
- Method 2 (without center constraint): removes the feature center constraint term.
- Method 3 (without weighted aggregation): replaces heterogeneity-weighted aggregation with standard averaging.
- Method 4 (full method): the complete method with all modules included.
4.5. Practical Application Expansion and Critical Analysis
4.5.1. Practical Application Scenarios
- Collaborative medical imaging diagnosis: In a cross-hospital brain MRI collaboration scenario, the heterogeneity-awareness mechanism of FedHAD can reconcile distribution differences between institutions through feature statistics and manifold analysis, effectively overcoming the model bias that arises because FedAvg ignores data heterogeneity. Compared with FedProx, which relies on a single regularization constraint, the multi-scale modeling of FedHAD addresses feature-space shift more comprehensively. Compared with the contrastive learning mechanism of FedMOON, which requires frequent exchange of feature vectors, its weighted aggregation significantly reduces the communication burden, providing an efficient solution for distributed medical diagnosis.
- Anomaly detection in the industrial IoT: Facing differing operating conditions in multi-factory equipment monitoring, the feature center constraint of FedHAD maintains cross-domain feature consistency and avoids the amplification of sensor noise caused by FedSAM’s gradient perturbation. Its heterogeneity-weighted aggregation mechanism screens for effective client updates, significantly alleviating the convergence delays caused by uninformative nodes and providing stable support for industrial equipment condition monitoring in environments with strong distribution differences.
- Cross-domain financial risk control modeling: In inter-bank anti-fraud cooperation, the manifold heterogeneity measure of FedHAD captures regional nonlinear patterns in depth, going beyond the limitations of FedMOON in complex distribution modeling. By fusing the feature center constraint with the multi-loss design, the proposed method achieves modeling stability notably better than FedSAM’s gradient perturbation strategy, enabling cross-regional collaborative perception of fraud features and providing distributional robustness for financial risk control scenarios.
4.5.2. Methodological Critical Review
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Jarwan, A.; Ibnkahla, M. Edge-based federated deep reinforcement learning for IoT traffic management. IEEE Internet Things J. 2022, 10, 3799–3813.
2. Deng, W.; Chen, X.; Li, X.; Zhao, H. Adaptive federated learning with negative inner product aggregation. IEEE Internet Things J. 2024, 11, 6570–6581.
3. Deng, W.; Li, K.; Zhao, H. A flight arrival time prediction method based on cluster clustering-based modular with deep neural network. IEEE Trans. Intell. Transp. Syst. 2023, 25, 6238–6247.
4. Li, X.; Zhao, H.; Deng, W. BFOD: Blockchain-based privacy protection and security sharing scheme of flight operation data. IEEE Internet Things J. 2023, 11, 3392–3401.
5. Hu, F.; Zhou, W.; Liao, K.; Li, H.; Tong, D. Toward federated learning models resistant to adversarial attacks. IEEE Internet Things J. 2023, 10, 16917–16930.
6. Yan, Z.; Yang, H.; Guo, D.; Lin, Y. Improving airport arrival flow prediction considering heterogeneous and dynamic network dependencies. Inf. Fusion 2023, 100, 101924.
7. Wang, S.; Luo, X.; Qian, Y.; Zhu, Y.; Chen, K.; Chen, Q.; Xin, B.; Yang, W. Shuffle differential private data aggregation for random population. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 1667–1681.
8. Chang, Y.; Zhang, K.; Gong, J.; Qian, H. Privacy-preserving federated learning via functional encryption, revisited. IEEE Trans. Inf. Forensics Secur. 2023, 18, 1855–1869.
9. Zhang, L.; Xu, J.; Vijayakumar, P.; Sharma, P.K.; Ghosh, U. Homomorphic encryption-based privacy-preserving federated learning in IoT-enabled healthcare system. IEEE Trans. Netw. Sci. Eng. 2022, 10, 2864–2880.
10. Yu, X.; Liu, Z.; Sun, Y.; Wang, W. Clustered federated learning for heterogeneous data (student abstract). In Proceedings of the 37th AAAI Conference on Artificial Intelligence, Virtual, USA, 7–14 February 2023; pp. 16378–16379.
11. Ruan, Y.; Joe-Wong, C. FedSoft: Soft clustered federated learning with proximal local updating. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, Virtual, Canada, 22 February–1 March 2022; pp. 8124–8131.
12. Diao, Y.; Li, Q.; He, B. Towards addressing label skews in one-shot federated learning. In Proceedings of the 11th International Conference on Learning Representations (ICLR), Kigali, Rwanda, 1–5 May 2023.
13. Nagalapatti, L.; Mittal, R.S.; Narayanam, R. Is your data relevant?: Dynamic selection of relevant data for federated learning. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, Virtual, Canada, 22 February–1 March 2022; pp. 7859–7867.
14. Dai, Y.; Chen, Z.; Li, J.; Heinecke, S.; Sun, L.; Xu, R. Tackling data heterogeneity in federated learning with class prototypes. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, Virtual, USA, 7–14 February 2023; pp. 7314–7322.
15. Zhang, F.; Li, Y.; Lin, S.; Shao, Y.; Jiang, J.; Liu, X. Large sparse kernels for federated learning. In Proceedings of the ICLR, Kigali, Rwanda, 31 May 2023. Available online: https://openreview.net/forum?id=ZCv4E1unfJP (accessed on 5 June 2024).
16. Duan, J.H.; Li, W.; Lu, S. FedDNA: Federated learning with decoupled normalization-layer aggregation for non-IID data. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Bilbao, Spain, 13–17 September 2021; pp. 722–737.
17. Zhang, X.; Hong, M.; Dhople, S.; Yin, W.; Liu, Y. FedPD: A federated learning framework with adaptivity to non-IID data. IEEE Trans. Signal Process. 2021, 69, 6055–6070.
18. Vahidian, S.; Morafah, M.; Lin, B. Personalized federated learning by structured and unstructured pruning under data heterogeneity. In Proceedings of the 41st IEEE International Conference on Distributed Computing Systems Workshops, Washington, DC, USA, 7–10 July 2021; pp. 27–34.
19. Zhu, Z.; Hong, J.; Zhou, J. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 12878–12889.
20. Wu, C.; Wu, F.; Lyu, L.; Huang, Y.; Xie, X. Communication-efficient federated learning via knowledge distillation. Nat. Commun. 2022, 13, 2032.
21. Zhao, J.; Zhu, X.; Wang, J.; Xiao, J. Efficient client contribution evaluation for horizontal federated learning. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, ON, Canada, 6–11 June 2021; pp. 3060–3064.
22. Yuan, X.; Li, P. On convergence of FedProx: Local dissimilarity invariant bounds, non-smoothness and beyond. In Proceedings of the 36th Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; pp. 10752–10765.
23. Abdelmoniem, A.M.; Ho, C.Y.; Papageorgiou, P.; Canini, M. Empirical analysis of federated learning in heterogeneous environments. In Proceedings of the 2nd European Workshop on Machine Learning and Systems, Rennes, France, 5 April 2022; pp. 1–9.
24. Li, B.; Peng, Z.; Li, Y.; Xu, M.; Chen, S.; Ji, B.; Shen, C. Neighborhood and global perturbations supported SAM in federated learning: From local tweaks to global awareness. arXiv 2024, arXiv:2408.14144.
25. Wang, Y.; Fu, H.; Kanagavelu, R.; Wei, Q.; Liu, Y.; Goh, R.S.M. An aggregation-free federated learning for tackling data heterogeneity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 26233–26242.
26. Husnoo, M.A.; Anwar, A.; Hosseinzadeh, N.; Islam, S.N.; Mahmood, A.N.; Doss, R. FedRep: Towards horizontal federated load forecasting for retail energy providers. In Proceedings of the IEEE Asia-Pacific Power and Energy Engineering Conference, Melbourne, Australia, 20–23 November 2022; pp. 1–6.
27. Ek, S.; Portet, F.; Lalanda, P.; Vega, G. Evaluation of federated learning aggregation algorithms: Application to human activity recognition. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing, Virtual, 12–17 September 2020; pp. 638–643.
28. Guo, Z.; Liu, A.; Dong, R. Research on optical mineral image recognition based on federated learning. In Proceedings of the International Conference on Next Generation Data-Driven Networks, Xi’an, China, 15–17 March 2024; pp. 364–369.
29. Zhou, X.; Yang, Q.; Zheng, X.; Liang, W.; Wang, K.I.-K.; Ma, J.; Pan, Y.; Jin, Q. Personalized federated learning with model-contrastive learning for multi-modal user modeling in human-centric metaverse. IEEE J. Sel. Areas Commun. 2024, 42, 817–831.
30. Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; Poor, H.V. Tackling the objective inconsistency problem in heterogeneous federated optimization. In Proceedings of the 34th Conference on Neural Information Processing Systems, Virtual, Canada, 6–12 December 2020; pp. 7611–7623.
31. Wang, Z.; Wang, Z.; Fan, X.; Wang, C. Federated learning with domain shift eraser. In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 11–15 June 2025; pp. 4978–4987.
32. Lin, Z.; Wei, W.; Chen, Z.; Lam, C.-T.; Chen, X.; Gao, Y.; Luo, J. Hierarchical split federated learning: Convergence analysis and system optimization. IEEE Trans. Mob. Comput. 2025, 1–16.
Dataset Name | Number of Classes | Image Count | Image Size | Train/Test Split | Sample Type | Storage Size
---|---|---|---|---|---|---
CIFAR-10 | 10 | 60,000 | 32 × 32 | 50,000/10,000 | Natural images | 177 MB |
SVHN Cropped | 10 | 99,289 | 32 × 32 | 73,257/26,032 | Street view digits | 1.32 GB |
MNIST | 10 | 70,000 | 28 × 28 | 60,000/10,000 | Handwritten digits | 11.6 MB |
NotMNIST | 10 | 145,000 | 28 × 28 | 120,000/25,000 | Alphabet images | 25.2 MB |
Method | CIFAR-10 Accuracy | CIFAR-10 Precision | CIFAR-10 Recall | SVHN Accuracy | SVHN Precision | SVHN Recall | MNIST Accuracy | MNIST Precision | MNIST Recall | NotMNIST Accuracy | NotMNIST Precision | NotMNIST Recall
---|---|---|---|---|---|---|---|---|---|---|---|---
FedAvg | 55.67 | 55.03 | 56.23 | 88.07 | 88.78 | 87.54 | 96.71 | 96.16 | 97.43 | 89.95 | 89.20 | 90.43
FedProx | 56.44 | 56.77 | 55.89 | 76.76 | 77.26 | 76.04 | 93.51 | 93.15 | 94.14 | 86.25 | 86.56 | 85.79
FedSAM | 63.50 | 64.22 | 62.78 | 91.82 | 91.40 | 92.37 | 95.77 | 95.38 | 96.28 | 91.79 | 91.09 | 92.31
FedMOON | 66.54 | 66.19 | 67.33 | 92.71 | 93.04 | 92.21 | 98.28 | 98.88 | 97.91 | 92.44 | 92.00 | 93.01
FedHAD | 67.69 | 67.01 | 68.32 | 93.90 | 94.48 | 93.22 | 98.62 | 98.94 | 98.91 | 93.73 | 94.30 | 93.10
Method | CIFAR-10 Accuracy | CIFAR-10 Precision | CIFAR-10 Recall | SVHN Accuracy | SVHN Precision | SVHN Recall | MNIST Accuracy | MNIST Precision | MNIST Recall | NotMNIST Accuracy | NotMNIST Precision | NotMNIST Recall
---|---|---|---|---|---|---|---|---|---|---|---|---
FedAvg | 47.63 | 47.12 | 48.13 | 69.65 | 69.08 | 70.30 | 89.16 | 88.95 | 89.89 | 82.33 | 82.91 | 81.59
FedProx | 47.05 | 46.65 | 47.52 | 46.73 | 47.47 | 46.17 | 67.58 | 66.13 | 66.95 | 61.07 | 60.60 | 61.94
FedSAM | 45.94 | 45.83 | 46.57 | 76.81 | 77.34 | 76.11 | 87.67 | 87.87 | 87.41 | 88.67 | 88.90 | 88.21
FedMOON | 50.64 | 51.42 | 51.03 | 83.37 | 83.01 | 83.45 | 96.30 | 96.71 | 95.91 | 92.39 | 92.89 | 93.59
FedHAD | 52.71 | 52.03 | 53.33 | 85.19 | 84.36 | 85.60 | 97.41 | 97.98 | 96.82 | 92.86 | 92.39 | 93.74
Method | Description | Heterogeneity Loss | Center Constraint | Weighted Aggregation | ACC (%) | Loss |
---|---|---|---|---|---|---|
Method 1 | Without heterogeneity loss | ✗ | ✗ | ✓ | 47.63 | 1.6466
Method 2 | Without feature center constraint | ✓ | ✗ | ✓ | 52.14 | 1.3859
Method 3 | Replacing weighted aggregation with simple mean | ✓ | ✗ | ✗ | 51.31 | 1.4412
Method 4 | Full method (all modules included) | ✓ | ✓ | ✓ | 53.74 | 1.3683