Class-Hidden Client-Side Watermarking in Federated Learning
Abstract
1. Introduction
- This study proposes a watermarking method whose watermark labels are independent of the primary-task labels, so the watermark information does not interfere with the task outputs. This independence keeps watermark embedding from degrading the model's primary-task performance, guaranteeing high fidelity.
- By modifying the model's structure, the watermark can be embedded effectively into different types of models without relying on any specific class, which improves the watermark's adaptability (a minimal illustrative sketch of this idea follows the list).
- Extensive experiments demonstrate that the proposed method is strongly robust against pruning and fine-tuning attacks.
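As a concrete illustration of the label-separation idea above, the sketch below appends reserved watermark classes to a classifier head so that watermark labels never overlap the task labels. This is a minimal PyTorch sketch under our own assumptions (the class counts, layer names, and the `hide_wm` switch are illustrative), not the authors' implementation.

```python
# Illustrative sketch only: keep watermark labels disjoint from task labels by
# appending reserved output units for watermark classes. Sizes and names are
# assumptions for illustration, not the paper's exact construction.
import torch
import torch.nn as nn

class ClassHiddenHead(nn.Module):
    """Classifier head scoring task classes plus reserved watermark classes."""
    def __init__(self, feat_dim: int, num_task_classes: int, num_wm_classes: int):
        super().__init__()
        self.num_task_classes = num_task_classes
        # One linear layer produces logits for both label spaces.
        self.fc = nn.Linear(feat_dim, num_task_classes + num_wm_classes)

    def forward(self, feats: torch.Tensor, hide_wm: bool = True) -> torch.Tensor:
        logits = self.fc(feats)
        if hide_wm:
            # Normal inference exposes only the task classes, so the watermark
            # label space stays hidden from users of the model.
            return logits[:, : self.num_task_classes]
        return logits  # full logits, used while embedding/verifying the watermark

# Toy usage: task samples use labels 0..9, trigger samples use labels 10..11,
# so the two label spaces never interfere.
head = ClassHiddenHead(feat_dim=512, num_task_classes=10, num_wm_classes=2)
feats = torch.randn(4, 512)
task_logits = head(feats)                 # shape (4, 10) for the primary task
full_logits = head(feats, hide_wm=False)  # shape (4, 12) during watermark training
loss = nn.CrossEntropyLoss()(full_logits, torch.tensor([10, 11, 3, 7]))
print(task_logits.shape, full_logits.shape, loss.item())
```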
2. Related Works
2.1. Centralized Learning Watermarking Schemes
2.2. Federated Learning Watermarking Schemes
3. Threat Model and Watermark Requirements
3.1. Threat Model
3.2. Watermarking Scenario and Requirements
4. Proposed Method
4.1. Watermark Dataset Generation
4.2. Watermark Embedding
4.3. Watermark Verification
5. Experiments and Analysis
5.1. Experiment Settings
5.1.1. Dataset and Model Settings
5.1.2. Watermark Dataset Setting
5.2. Performance Analysis Under IID
5.2.1. Fidelity
5.2.2. Detectability
5.2.3. Robustness
5.2.4. Secrecy
5.2.5. False Negative Ratio
5.2.6. Efficiency Evaluation
5.3. Performance Analysis Under Non-IID
5.3.1. Robustness Against Fine-Tuning Attack
5.3.2. Robustness Against Pruning Attack
5.3.3. Robustness Against Overwriting Attack
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. PMLR 2017, 54, 1273–1282.
- Wu, D.; Wang, N.; Zhang, J.; Zhang, Y.; Xiang, Y.; Gao, L. A Blockchain-based Multi-layer Decentralized Framework for Robust Federated Learning. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; pp. 1–8.
- Salem, A.; Zhang, Y.; Humbert, M.; Berrang, P.; Fritz, M.; Backes, M. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. In Proceedings of the NDSS, San Diego, CA, USA, 24–27 February 2019.
- Liu, L.; Wang, Y.; Liu, G.; Peng, K.; Wang, C. Membership Inference Attacks Against Machine Learning Models via Prediction Sensitivity. IEEE Trans. 2023, 20, 2341–2347.
- Zhu, C.; Zhang, J.; Sun, X.; Chen, B.; Meng, W. ADFL: Defending backdoor attacks in federated learning via adversarial distillation. Comput. Secur. 2023, 132, 103366.
- Oh, S.J.; Schiele, B.; Fritz, M. Towards Reverse-Engineering Black-Box Neural Networks. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer: Cham, Switzerland, 2019; pp. 121–144.
- Oh, S.J.; Augustin, M.; Schiele, B.; Fritz, M. Towards Reverse-Engineering Black-Box Neural Networks. arXiv 2018, arXiv:1711.01768.
- Orekondy, T.; Schiele, B.; Fritz, M. Knockoff Nets: Stealing Functionality of Black-Box Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4954–4963.
- Wang, B.; Gong, N.Z. Stealing Hyperparameters in Machine Learning. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–24 May 2018; pp. 36–52.
- Yan, M.; Fletcher, C.W.; Torrellas, J. Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures. In Proceedings of the USENIX, Online, 12–14 August 2020; pp. 2003–2020.
- Jagielski, M.; Carlini, N.; Berthelot, D.; Kurakin, A.; Papernot, N. High Accuracy and High Fidelity Extraction of Neural Networks. In Proceedings of the USENIX, Online, 12–14 August 2020; pp. 1345–1362.
- Zhu, Y.; Cheng, Y.; Zhou, H.; Lu, Y. Hermes Attack: Steal DNN Models with Lossless Inference Accuracy. In Proceedings of the USENIX, Virtual Event, 14–16 July 2021; pp. 1973–1988.
- Wang, J.; Wu, H.; Zhang, X.; Yao, Y. Watermarking in Deep Neural Networks via Error Back-propagation. Soc. Imaging Sci. Technol. 2020, 32, 1–9.
- Namba, R.; Sakuma, J. Robust Watermarking of Neural Network with Exponential Weighting. In Proceedings of the ACM, Auckland, New Zealand, 9–12 July 2019; pp. 228–240.
- Ong, D.S.; Chan, C.S.; Ng, K.W.; Fan, L.; Yang, Q. Protecting Intellectual Property of Generative Adversarial Networks From Ambiguity Attacks. In Proceedings of the IEEE/CVF CVPR, Nashville, TN, USA, 19–25 June 2021; pp. 3630–3639.
- Chen, X.; Chen, T.; Zhang, Z.; Wang, Z. You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership. In Proceedings of the NeurIPS, Online, 6–14 December 2021; pp. 1780–1791.
- Lim, J.H.; Chan, C.S.; Ng, K.W.; Fan, L.; Yang, Q. Protect, show, attend and tell: Empowering image captioning models with ownership protection. Pattern Recognit. 2022, 122, 108285.
- April Pyone, M.M.; Kiya, H. Piracy-Resistant DNN Watermarking by Block-Wise Image Transformation with Secret Key. In Proceedings of the ACM, Virtual Event, 22–25 June 2021; pp. 159–164.
- Fan, L.; Ng, K.W.; Chan, C.S.; Yang, Q. DeepIPR: Deep Neural Network Ownership Verification With Passports. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 6122–6139.
- Szyller, S.; Atli, B.G.; Marchal, S.; Asokan, N. DAWN: Dynamic Adversarial Watermarking of Neural Networks. In Proceedings of the ACM, Virtual Event, 22–25 June 2021; pp. 4417–4425.
- Fkirin, A.; Attiya, G.; El-Sayed, A.; Shouman, M.A. Copyright protection of deep neural network models using digital watermarking: A comparative study. Multimed. Tools Appl. 2022, 81, 15961–15975.
- Han, B.; Jhaveri, R.H.; Wang, H.; Qiao, D.; Du, J. Application of Robust Zero-Watermarking Scheme Based on Federated Learning for Securing the Healthcare Data. IEEE J. 2023, 27, 804–813.
- Xue, M.; Sun, S.; Zhang, Y.; Wang, J.; Liu, W. Active intellectual property protection for deep neural networks through stealthy backdoor and users’ identities authentication. Appl. Intell. 2022, 52, 16497–16511.
- Jia, H.; Choquette-Choo, C.A.; Chandrasekaran, V.; Papernot, N. Entangled Watermarks as a Defense against Model Extraction. In Proceedings of the USENIX, Virtual Event, 14–16 July 2021; pp. 1937–1954.
- Rouhani, B.D.; Chen, H.; Koushanfar, F. DeepSigns: An End-to-End Watermarking Framework for Ownership Protection of Deep Neural Networks. In Proceedings of the ACM ASPLOS, Providence, RI, USA, 13–17 April 2019; pp. 485–497.
- Shao, S.; Yang, W.; Gu, H.; Lou, J.; Qin, Z.; Fan, L.; Yang, Q.; Ren, K. FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model. arXiv 2022, arXiv:2211.07160.
- Chen, J.; Li, M.; Cheng, Y.; Zheng, H. FedRight: An effective model copyright protection for federated learning. Comput. Secur. 2023, 135, 103504.
- Li, B.; Fan, L.; Gu, H.; Li, J.; Yang, Q. FedIPR: Ownership Verification for Federated Deep Neural Network Models. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 4521–4536.
- Liang, J.; Wang, R. FedCIP: Federated Client Intellectual Property Protection with Traitor Tracking. arXiv 2023, arXiv:2306.01356.
- Yin, Z.; Yin, H.; Zhang, X. Neural Network Fragile Watermarking With No Model Performance Degradation. In Proceedings of the IEEE ICIP, Bordeaux, France, 16–19 October 2022; pp. 3958–3962.
- Uchida, Y.; Nagai, Y.; Sakazawa, S.; Satoh, S. Embedding Watermarks into Deep Neural Networks. In Proceedings of the ACM ICMR, Bucharest, Romania, 6–9 June 2017; pp. 269–277.
- Liu, H.; Weng, Z.; Zhu, Y. Watermarking Deep Neural Networks with Greedy Residuals. ICML 2021, 139, 6978–6988.
- Yan, Y.; Pan, X.; Zhang, M.; Yang, M. Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation. In Proceedings of the USENIX, Anaheim, CA, USA, 9–11 August 2023. Available online: https://www.usenix.org/conference/usenixsecurity23/presentation/yan (accessed on 18 March 2024).
- Zhang, J.; Chen, D.; Liao, J.; Zhang, W.; Hua, G.; Yu, N. Passport-aware Normalization for Deep Model Protection. In Proceedings of the NeurIPS, Virtual, 6–12 December 2020.
- Fan, L.; Ng, K.W.; Chan, C.S. Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks. In Proceedings of the NeurIPS, Vancouver, BC, Canada, 8–14 December 2019; pp. 4716–4725.
- Adi, Y.; Baum, C.; Cissé, M.; Pinkas, B.; Keshet, J. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring. In Proceedings of the USENIX, Baltimore, MD, USA, 15–17 August 2018; pp. 1615–1631.
- Li, M.; Zhong, Q.; Zhang, L.Y.; Du, Y.; Zhang, J.; Xiang, Y. Protecting the Intellectual Property of Deep Neural Networks with Watermarking: The Frequency Domain Approach. In Proceedings of the 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Guangzhou, China, 29 December 2020–1 January 2021; pp. 402–409.
- Tekgul, B.G.A.; Xia, Y.; Marchal, S.; Asokan, N. WAFFLE: Watermarking in Federated Learning. In Proceedings of the IEEE SRDS, Chicago, IL, USA, 20–23 September 2021; pp. 310–320.
- Yang, W.; Shao, S.; Yang, Y.; Liu, X.; Liu, X.; Xia, Z.; Schaefer, G.; Fang, H. Watermarking in Secure Federated Learning: A Verification Framework Based on Client-Side Backdooring. ACM Trans. Intell. Syst. Technol. 2024, 15, 5:1–5:25.
- Li, F.Q.; Wang, S.L.; Liew, A.W.C. Towards practical watermark for deep neural networks in federated learning. arXiv 2021, arXiv:2105.03167.
- Becker, G. Merkle Signature Schemes, Merkle Trees and Their Cryptanalysis. Ruhr-Univ. Bochum Tech. Rep. 2008, 12, 19.
- Shao, S.; Yang, W.; Gu, H.; Qin, Z.; Fan, L.; Yang, Q. FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model. IEEE Trans. Dependable Secur. Comput. 2024, 22, 114–131.
- Liu, X.; Shao, S.; Yang, Y.; Wu, K.; Yang, W.; Fang, H. Secure Federated Learning Model Verification: A Client-side Backdoor Triggered Watermarking Scheme. In Proceedings of the IEEE SMC, Melbourne, Australia, 17–20 October 2021; pp. 2414–2419.
- Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How To Backdoor Federated Learning. PMLR 2020, 108, 2938–2948.
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in PyTorch. 2017. Available online: https://api.semanticscholar.org/CorpusID:40027675 (accessed on 18 March 2024).
- LeCun, Y.; Cortes, C. MNIST Handwritten Digit Database; AT&T Labs: New York, NY, USA, 2010.
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009.
- Krizhevsky, A.; Nair, V.; Hinton, G. CIFAR-100 (Canadian Institute for Advanced Research). 2009. Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 18 March 2024).
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the ICLR, San Diego, CA, USA, 7–9 May 2015. Available online: http://arxiv.org/abs/1409.1556 (accessed on 18 March 2024).
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE CVPR, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- Nie, H.; Lu, S. FedCRMW: Federated model ownership verification with compression-resistant model watermarking. Expert Syst. Appl. 2024, 249, 123776.
Original-task and watermark accuracy (%) of each method under pruning rates from 10% to 90%:

Dataset | Metric | Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90%
---|---|---|---|---|---|---|---|---|---|---|---
CIFAR-10 | Original Task Acc (%) | FedIPR | 84.15 | 84.08 | 83.82 | 83.77 | 82.11 | 67.18 | 46.75 | 21.77 | 13.24
 | | WAFFLE | 80.33 | 79.65 | 79.78 | 76.61 | 71.35 | 30.13 | 25.87 | 15.98 | 9.74
 | | FedCRMW | 85.32 | 85.29 | 85.25 | 85.17 | 84.35 | 15.02 | 10.05 | 10.05 | 10.00
 | | FedTracker | 85.12 | 84.97 | 84.88 | 84.86 | 76.33 | 25.05 | 15.81 | 11.83 | 6.17
 | | Proposed | 84.95 | 84.90 | 84.82 | 84.80 | 84.64 | 84.59 | 64.55 | 29.19 | 15.91
 | Watermark Acc (%) | FedIPR | 100 | 100 | 100 | 100 | 100 | 54.12 | 32.15 | 15.42 | 10.37
 | | WAFFLE | 100 | 100 | 98.55 | 98.55 | 80.31 | 80.33 | 76.15 | 66.41 | 32.98
 | | FedCRMW | 98.41 | 98.41 | 98.41 | 98.41 | 98.41 | 15.87 | 6.35 | 6.35 | 6.35
 | | FedTracker | 100 | 100 | 99.11 | 90.16 | 82.13 | 10.52 | 17.73 | 10.92 | 7.28
 | | Proposed | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
MNIST | Original Task Acc (%) | FedIPR | 99.56 | 99.46 | 99.44 | 99.31 | 99.12 | 98.92 | 65.72 | 32.13 | 15.17
 | | WAFFLE | 99.15 | 99.12 | 99.04 | 98.99 | 97.21 | 76.52 | 32.13 | 15.91 | 10.11
 | | FedCRMW | 98.81 | 98.81 | 98.8 | 98.35 | 91.48 | 22.15 | 11.35 | 11.35 | 11.35
 | | FedTracker | 99.19 | 99.11 | 98.15 | 95.38 | 90.71 | 62.14 | 46.17 | 28.93 | 13.84
 | | Proposed | 99.31 | 99.32 | 99.34 | 99.34 | 99.35 | 87.32 | 69.27 | 50.05 | 29.94
 | Watermark Acc (%) | FedIPR | 100 | 100 | 100 | 98.61 | 98.23 | 88.90 | 65.31 | 46.21 | 34.18
 | | WAFFLE | 100 | 100 | 100 | 100 | 91.34 | 80.23 | 50.42 | 36.11 | 4.87
 | | FedCRMW | 100 | 100 | 100 | 88.89 | 49.21 | 23.81 | 7.94 | 7.94 | 7.94
 | | FedTracker | 100 | 100 | 100 | 95.17 | 51.84 | 35.96 | 21.75 | 17.96 | 6.92
 | | Proposed | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
CIFAR-100 | Original Task Acc (%) | FedIPR | 64.21 | 64.17 | 64.01 | 45.18 | 45.75 | 32.12 | 20.21 | 15.14 | 7.19
 | | WAFFLE | 60.19 | 60.06 | 59.88 | 59.66 | 31.21 | 16.90 | 8.11 | 4.18 | 2.32
 | | FedCRMW | 60.26 | 60.11 | 60.06 | 41.29 | 41.11 | 20.95 | 18.33 | 7.31 | 4.92
 | | FedTracker | 64.41 | 64.19 | 63.16 | 51.83 | 43.73 | 26.18 | 13.84 | 9.17 | 5.73
 | | Proposed | 64.55 | 64.55 | 63.17 | 63.07 | 40.15 | 20.84 | 16.31 | 14.29 | 8.59
 | Watermark Acc (%) | FedIPR | 100 | 100 | 100 | 96.19 | 96.14 | 60.42 | 54.86 | 32.18 | 15.81
 | | WAFFLE | 100 | 100 | 100 | 100 | 49.16 | 31.58 | 19.28 | 4.13 | 0.00
 | | FedCRMW | 100 | 100 | 100 | 79.15 | 48.92 | 37.95 | 20.73 | 15.75 | 6.95
 | | FedTracker | 100 | 100 | 100 | 87.92 | 60.17 | 51.85 | 39.77 | 27.15 | 10.83
 | | Proposed | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
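For context on how such numbers are typically produced, the sketch below applies global magnitude pruning at a chosen rate and then re-measures task and watermark accuracy. It assumes placeholder helpers (`model`, `task_loader`, `wm_loader`, `evaluate`) and is not the evaluation code used in the paper.

```python
# Sketch of a pruning-attack evaluation loop (assumptions: `model` is a trained
# torch.nn.Module, `evaluate(model, loader)` returns accuracy, and `task_loader`
# / `wm_loader` hold the test set and the watermark trigger set).
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_and_evaluate(model, task_loader, wm_loader, evaluate, rate: float):
    pruned = copy.deepcopy(model)  # keep the original model intact
    # Collect every conv/linear weight and zero out the globally smallest
    # magnitudes until `rate` of them are pruned.
    params = [(m, "weight") for m in pruned.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=rate)
    for module, name in params:
        prune.remove(module, name)  # make the zeroed weights permanent
    return evaluate(pruned, task_loader), evaluate(pruned, wm_loader)

# Example sweep over the pruning rates reported in the table above:
# for rate in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
#     task_acc, wm_acc = prune_and_evaluate(model, task_loader, wm_loader, evaluate, rate)
```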
Accuracy (%) of each method as the number of embedded ambiguous watermarks increases from 1 to 10:

Method | Model | Dataset | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---|---|---
Proposed | VGG-16 | CIFAR-10 | 99.34 | 99.24 | 99.25 | 99.31 | 99.31 | 99.25 | 99.34 | 99.3 | 99.34 | 99.35
 | | MNIST | 99.49 | 99.44 | 99.46 | 99.45 | 99.55 | 99.54 | 99.56 | 99.55 | 99.47 | 99.48
 | | CIFAR-100 | 99.08 | 99.06 | 99.14 | 99.04 | 99.16 | 99.13 | 99.11 | 99.05 | 99.10 | 99.05
 | ResNet-18 | CIFAR-10 | 99.53 | 99.44 | 99.46 | 99.51 | 99.54 | 99.56 | 99.45 | 99.5 | 99.5 | 99.48
 | | MNIST | 99.63 | 99.63 | 99.6 | 99.57 | 99.55 | 99.62 | 99.59 | 99.65 | 99.59 | 99.64
 | | CIFAR-100 | 99.28 | 99.28 | 99.27 | 99.29 | 99.3 | 99.3 | 99.27 | 99.24 | 99.29 | 99.34
WAFFLE | VGG-16 | CIFAR-10 | 65.11 | 65.12 | 65.09 | 65.08 | 65.13 | 65.14 | 65.1 | 65.14 | 65.14 | 65.11
 | | MNIST | 70.08 | 70.08 | 70.13 | 70.15 | 70.08 | 70.1 | 70.13 | 70.09 | 70.16 | 70.06
 | | CIFAR-100 | 59.29 | 59.25 | 59.26 | 59.24 | 59.26 | 59.3 | 59.28 | 59.25 | 59.25 | 59.32
 | ResNet-18 | CIFAR-10 | 69.34 | 69.28 | 69.27 | 69.35 | 69.32 | 69.35 | 69.33 | 69.28 | 69.35 | 69.36
 | | MNIST | 73.78 | 73.79 | 73.81 | 73.75 | 73.78 | 73.79 | 73.79 | 73.81 | 73.83 | 73.85
 | | CIFAR-100 | 62.78 | 62.78 | 62.77 | 62.81 | 62.77 | 62.81 | 62.83 | 62.83 | 62.85 | 62.82
FedIPR | VGG-16 | CIFAR-10 | 87.65 | 87.66 | 87.65 | 87.71 | 87.66 | 87.71 | 87.74 | 87.7 | 87.67 | 87.73
 | | MNIST | 89.75 | 89.75 | 89.75 | 89.75 | 89.66 | 89.65 | 89.74 | 89.67 | 89.69 | 89.71
 | | CIFAR-100 | 75.72 | 75.64 | 75.65 | 75.71 | 75.74 | 75.64 | 75.72 | 75.75 | 75.71 | 75.71
 | ResNet-18 | CIFAR-10 | 89.94 | 89.94 | 89.84 | 89.94 | 89.92 | 89.86 | 89.92 | 89.86 | 89.84 | 89.93
 | | MNIST | 91.95 | 91.87 | 91.96 | 91.86 | 91.85 | 91.9 | 91.85 | 91.84 | 91.91 | 91.96
 | | CIFAR-100 | 80.95 | 80.91 | 80.85 | 80.89 | 80.9 | 80.92 | 80.88 | 80.89 | 80.86 | 80.95
FedCRMW | VGG-16 | CIFAR-10 | 75.35 | 75.43 | 75.37 | 75.38 | 75.36 | 75.37 | 75.45 | 75.42 | 75.34 | 75.42
 | | MNIST | 79.56 | 79.57 | 79.65 | 79.56 | 79.59 | 79.58 | 79.61 | 79.61 | 79.62 | 79.61
 | | CIFAR-100 | 71.61 | 71.64 | 71.62 | 71.59 | 71.65 | 71.62 | 71.63 | 71.63 | 71.57 | 71.58
 | ResNet-18 | CIFAR-10 | 87.56 | 87.65 | 87.65 | 87.63 | 87.54 | 87.58 | 87.56 | 87.65 | 87.63 | 87.6
 | | MNIST | 85.16 | 85.05 | 85.1 | 85.1 | 85.13 | 85.12 | 85.11 | 85.14 | 85.1 | 85.12
 | | CIFAR-100 | 81.16 | 81.05 | 81.14 | 81.12 | 81.07 | 81.06 | 81.07 | 81.08 | 81.05 | 81.14
FedTracker | VGG-16 | CIFAR-10 | 91.46 | 91.46 | 91.55 | 91.55 | 91.49 | 91.46 | 91.56 | 91.5 | 91.51 | 91.53
 | | MNIST | 90.94 | 90.92 | 90.91 | 90.91 | 90.91 | 90.92 | 90.93 | 90.86 | 90.87 | 90.87
 | | CIFAR-100 | 93.85 | 93.84 | 93.91 | 93.85 | 93.86 | 93.87 | 93.9 | 93.87 | 93.89 | 93.92
 | ResNet-18 | CIFAR-10 | 96.12 | 96.06 | 96.08 | 96.15 | 96.07 | 96.06 | 96.16 | 96.08 | 96.08 | 96.15
 | | MNIST | 95.68 | 95.76 | 95.67 | 95.71 | 95.67 | 95.69 | 95.76 | 95.68 | 95.65 | 95.76
 | | CIFAR-100 | 94.36 | 94.34 | 94.33 | 94.3 | 94.34 | 94.26 | 94.27 | 94.36 | 94.36 | 94.29
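One common way to realize the attack swept in this table is to fine-tune the model on an adversary's own trigger set so that additional, ambiguous watermarks get embedded before the original watermark is re-measured. The sketch below shows that loop under assumed helpers (`adv_trigger_loader`, `epochs`, `lr`); it is an illustrative attack simulation, not the authors' protocol.

```python
# Sketch of an overwriting/ambiguity attack: the adversary fine-tunes a copy of
# the model on their own trigger/label pairs to embed an extra "ambiguous"
# watermark. Helper names and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def embed_ambiguous_watermark(model, adv_trigger_loader, epochs: int = 5, lr: float = 1e-3):
    attacked = copy.deepcopy(model)
    opt = torch.optim.SGD(attacked.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    attacked.train()
    for _ in range(epochs):
        for x, y_adv in adv_trigger_loader:  # adversary's own trigger samples and labels
            opt.zero_grad()
            loss_fn(attacked(x), y_adv).backward()
            opt.step()
    return attacked

# Repeating this with 1..10 distinct adversarial trigger sets reproduces the
# sweep over "number of ambiguous watermarks" in the table above.
```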
Detection accuracy (%) on watermark (trigger), clean, and ambiguous samples across repeated experiments:

Model | Samples | Dataset | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90
---|---|---|---|---|---|---|---|---|---|---|---
VGG-16 | Watermark | CIFAR-10 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
 | | MNIST | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
 | | CIFAR-100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
 | Clean | CIFAR-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
 | | MNIST | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
 | | CIFAR-100 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
 | Ambiguous | CIFAR-10 | 63.92 | 64.61 | 60.51 | 63.77 | 61.26 | 62.82 | 60.97 | 60.73 | 63.99
 | | MNIST | 61.21 | 63.48 | 58.82 | 58.16 | 62.22 | 59.02 | 62.64 | 59.99 | 59.08
 | | CIFAR-100 | 66.09 | 63.84 | 65.23 | 64.24 | 63.43 | 66.81 | 65.01 | 65.04 | 65.72
ResNet-18 | Watermark | CIFAR-10 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
 | | MNIST | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
 | | CIFAR-100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
 | Clean | CIFAR-10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
 | | MNIST | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
 | | CIFAR-100 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
 | Ambiguous | CIFAR-10 | 62.37 | 62.51 | 63.46 | 63.22 | 67.91 | 64.11 | 62.09 | 65.84 | 64.23
 | | MNIST | 61.05 | 63.36 | 66.33 | 61.19 | 66.21 | 66.46 | 63.10 | 61.07 | 61.03
 | | CIFAR-100 | 63.29 | 61.93 | 58.17 | 63.22 | 61.49 | 59.51 | 61.35 | 59.89 | 59.06
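The gap above (100% on trigger samples, 0% on clean samples, roughly 60% on ambiguous ones) supports a simple thresholded ownership test. Below is a hedged sketch of such a decision rule; the threshold value and helper names are assumptions, not the paper's exact verification procedure.

```python
# Sketch of black-box watermark verification by trigger-set accuracy.
# Assumptions: `model` maps a trigger input batch to full logits (including the
# reserved watermark classes) and `trigger_set` is a list of (input, wm_label)
# pairs held by the verifier.
import torch

@torch.no_grad()
def verify_watermark(model, trigger_set, threshold: float = 0.9) -> bool:
    correct = 0
    for x, wm_label in trigger_set:
        logits = model(x.unsqueeze(0))  # full logits, watermark classes included
        correct += int(logits.argmax(dim=1).item() == wm_label)
    wm_acc = correct / len(trigger_set)
    # Claim ownership only if trigger-set accuracy clears the threshold; clean
    # models score near 0 on the trigger set, keeping false positives unlikely.
    return wm_acc >= threshold
```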
Efficiency comparison: time per global iteration (s), watermark embedding time (s), and watermark verification time (ms):

Method | Dataset | Global Iteration (s) | Watermark Embedding (s) | Watermark Verification (ms)
---|---|---|---|---
Clean | CIFAR-10 | 257.01 | - | -
 | MNIST | 233.04 | - | -
 | CIFAR-100 | 290.09 | - | -
Proposed | CIFAR-10 | 257.14 | 0.13 | 0.064
 | MNIST | 233.11 | 0.07 | 0.064
 | CIFAR-100 | 290.17 | 0.08 | 0.062
FedIPR | CIFAR-10 | 361.05 | 104.04 | 0.083
 | MNIST | 348.84 | 115.80 | 0.074
 | CIFAR-100 | 398.94 | 108.85 | 0.091
FedCRMW | CIFAR-10 | 324.75 | 67.74 | 0.073
 | MNIST | 293.16 | 60.12 | 0.071
 | CIFAR-100 | 343.61 | 52.70 | 0.077
FedTracker | CIFAR-10 | 314.32 | 57.31 | 0.078
 | MNIST | 289.75 | 56.71 | 0.072
 | CIFAR-100 | 327.16 | 37.07 | 0.081