Article

Differentially Private Federated Learning with Adaptive Clipping Thresholds

Jianhua Liu, Yanglin Zeng, Yao Tong, +2 authors
Under non-independent and identically distributed (non-IID) conditions, local model updates vary substantially across clients and training phases during the collaborative modeling process of differentially private federated learning (DP-FL). Fixed clipping thresholds and noise scales cannot accommodate this diversity, producing mismatches between local update intensity and noise perturbation that leak private data and degrade model accuracy. To address this, we propose a differentially private federated learning method based on adaptive clipping thresholds. In each communication round, the server adaptively estimates the round's global clipping threshold with a quantile strategy applied to the statistical distribution of client update norms, while clients adaptively adjust their noise scales according to the threshold's magnitude, dynamically matching clipping intensity and noise perturbation across training phases and clients. The novelty of this work lies in a quantile-driven, round-wise global clipping adaptation that synchronizes sensitivity bounding and noise calibration across heterogeneous clients, improving the privacy–utility trade-off under a fixed privacy accountant. In experiments on rail damage datasets, the proposed method slightly reduces the attacker's MIA ROC-AUC by 0.0033 and 0.0080 compared with Fed-DPA and DP-FedAvg, respectively, indicating stronger privacy protection, while improving average accuracy by 1.55% and 3.35% and converging faster and more stably. We further validate its effectiveness on CIFAR-10 under non-IID partitions.
14 March 2026
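
Below is a minimal sketch, not the authors' implementation, of the two adaptive steps the abstract describes in a single communication round: a server-side quantile estimate of the round's clipping threshold and client-side noise scaled to that threshold. The function names `estimate_clip_threshold` and `privatize_update`, the quantile level `q`, and the noise multiplier are illustrative assumptions, and the quantile is computed on raw norms purely for clarity; a deployed DP system would estimate it privately.

```python
# Hypothetical sketch of one DP-FL round with quantile-based adaptive clipping.
# All names and parameter choices here are illustrative, not from the paper.
import numpy as np

def estimate_clip_threshold(update_norms, q=0.5):
    """Server side: set the round's global clipping threshold to the q-th
    quantile of client update norms. NOTE: a real DP pipeline would estimate
    this quantile privately (e.g., from noisy per-client signals); raw norms
    are used here only to keep the sketch short."""
    return float(np.quantile(update_norms, q))

def privatize_update(update, clip_threshold, noise_multiplier, rng):
    """Client side: clip the update to the global threshold, then add Gaussian
    noise whose scale tracks the threshold (the sensitivity bound)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_threshold / max(norm, 1e-12))
    sigma = noise_multiplier * clip_threshold  # noise scale follows the threshold
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# One illustrative round with synthetic, heterogeneous (non-IID-like) updates.
rng = np.random.default_rng(0)
client_updates = [rng.normal(0.0, s, size=10) for s in (0.5, 1.0, 2.0, 4.0)]
norms = [np.linalg.norm(u) for u in client_updates]

C_t = estimate_clip_threshold(norms, q=0.5)  # adaptive threshold for round t
noisy = [privatize_update(u, C_t, noise_multiplier=1.0, rng=rng)
         for u in client_updates]
global_update = np.mean(noisy, axis=0)       # server aggregates noisy updates

print(f"round threshold C_t = {C_t:.3f}")
print(f"aggregated update norm = {np.linalg.norm(global_update):.3f}")
```

Tying the Gaussian noise scale to the per-round threshold is what keeps clipping and perturbation matched: under the Gaussian mechanism, sigma proportional to the sensitivity bound holds the per-round privacy cost fixed while the absolute perturbation shrinks or grows with the clipping threshold, which is the dynamic-matching behavior the abstract claims.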